00:00:00.001 Started by upstream project "autotest-per-patch" build number 127089
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.096 The recommended git tool is: git
00:00:00.096 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.141 Fetching changes from the remote Git repository
00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.183 Using shallow fetch with depth 1
00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.183 > git --version # timeout=10
00:00:00.217 > git --version # 'git version 2.39.2'
00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.232 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.232 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.706 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.716 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.729 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:06.729 > git config core.sparsecheckout # timeout=10
00:00:06.740 > git read-tree -mu HEAD # timeout=10
00:00:06.757 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:06.798 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:06.799 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.897 [Pipeline] Start of Pipeline
00:00:06.908 [Pipeline] library
00:00:06.910 Loading library shm_lib@master
00:00:06.910 Library shm_lib@master is cached. Copying from home.
00:00:06.924 [Pipeline] node
00:00:06.932 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.933 [Pipeline] {
00:00:06.942 [Pipeline] catchError
00:00:06.943 [Pipeline] {
00:00:06.958 [Pipeline] wrap
00:00:06.967 [Pipeline] {
00:00:06.974 [Pipeline] stage
00:00:06.976 [Pipeline] { (Prologue)
00:00:07.186 [Pipeline] sh
00:00:07.474 + logger -p user.info -t JENKINS-CI
00:00:07.496 [Pipeline] echo
00:00:07.497 Node: WFP8
00:00:07.504 [Pipeline] sh
00:00:07.808 [Pipeline] setCustomBuildProperty
00:00:07.822 [Pipeline] echo
00:00:07.824 Cleanup processes
00:00:07.830 [Pipeline] sh
00:00:08.125 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.125 1762259 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.140 [Pipeline] sh
00:00:08.426 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.426 ++ grep -v 'sudo pgrep'
00:00:08.426 ++ awk '{print $1}'
00:00:08.426 + sudo kill -9
00:00:08.426 + true
00:00:08.442 [Pipeline] cleanWs
00:00:08.453 [WS-CLEANUP] Deleting project workspace...
00:00:08.453 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.459 [WS-CLEANUP] done
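The cleanup step above is a small shell idiom worth noting: `pgrep -af` lists every process whose command line mentions the workspace, `grep -v 'sudo pgrep'` drops the pgrep invocation itself from that list, `awk '{print $1}'` keeps only the PIDs, and the `+ true` shows the stage tolerating a `kill -9` that received no PIDs at all. A minimal standalone sketch of the same pattern (workspace path taken from this job; adjust for your own):

  #!/usr/bin/env bash
  # Kill anything left over from a previous run of this workspace.
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pgrep -af prints "PID full-command"; drop the pgrep line itself,
  # then keep only the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
  # With no stale processes, $pids is empty and kill fails; do not let
  # that fail the cleanup stage. ($pids is deliberately unquoted so each
  # PID becomes its own argument.)
  sudo kill -9 $pids || true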
00:00:08.463 [Pipeline] setCustomBuildProperty
00:00:08.480 [Pipeline] sh
00:00:08.766 + sudo git config --global --replace-all safe.directory '*'
00:00:08.849 [Pipeline] httpRequest
00:00:08.870 [Pipeline] echo
00:00:08.871 Sorcerer 10.211.164.101 is alive
00:00:08.880 [Pipeline] httpRequest
00:00:08.885 HttpMethod: GET
00:00:08.885 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.886 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.888 Response Code: HTTP/1.1 200 OK
00:00:08.889 Success: Status code 200 is in the accepted range: 200,404
00:00:08.890 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.863 [Pipeline] sh
00:00:10.150 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.172 [Pipeline] httpRequest
00:00:10.188 [Pipeline] echo
00:00:10.190 Sorcerer 10.211.164.101 is alive
00:00:10.200 [Pipeline] httpRequest
00:00:10.205 HttpMethod: GET
00:00:10.205 URL: http://10.211.164.101/packages/spdk_3bc1795d30f064434535a81a05bbd560c40a398b.tar.gz
00:00:10.206 Sending request to url: http://10.211.164.101/packages/spdk_3bc1795d30f064434535a81a05bbd560c40a398b.tar.gz
00:00:10.226 Response Code: HTTP/1.1 200 OK
00:00:10.226 Success: Status code 200 is in the accepted range: 200,404
00:00:10.227 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3bc1795d30f064434535a81a05bbd560c40a398b.tar.gz
00:01:08.857 [Pipeline] sh
00:01:09.144 + tar --no-same-owner -xf spdk_3bc1795d30f064434535a81a05bbd560c40a398b.tar.gz
00:01:11.697 [Pipeline] sh
00:01:11.992 + git -C spdk log --oneline -n5
00:01:11.994 3bc1795d3 accel_perf: add support for DIX Generate/Verify
00:01:11.994 0a6bb28fa test/accel/dif: add DIX Generate/Verify suites
00:01:11.994 52c295e65 lib/accel: add DIX verify
00:01:11.994 b5c6fc4f3 lib/accel: add DIX generate
00:01:11.994 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:01:12.008 [Pipeline] }
00:01:12.018 [Pipeline] // stage
00:01:12.024 [Pipeline] stage
00:01:12.025 [Pipeline] { (Prepare)
00:01:12.036 [Pipeline] writeFile
00:01:12.046 [Pipeline] sh
00:01:12.325 + logger -p user.info -t JENKINS-CI
00:01:12.337 [Pipeline] sh
00:01:12.622 + logger -p user.info -t JENKINS-CI
00:01:12.638 [Pipeline] sh
00:01:12.964 + cat autorun-spdk.conf
00:01:12.964 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.964 SPDK_TEST_NVMF=1
00:01:12.964 SPDK_TEST_NVME_CLI=1
00:01:12.964 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.964 SPDK_TEST_NVMF_NICS=e810
00:01:12.964 SPDK_TEST_VFIOUSER=1
00:01:12.964 SPDK_RUN_UBSAN=1
00:01:12.964 NET_TYPE=phy
00:01:12.973 RUN_NIGHTLY=0
00:01:12.977 [Pipeline] readFile
00:01:13.004 [Pipeline] withEnv
00:01:13.007 [Pipeline] {
00:01:13.022 [Pipeline] sh
00:01:13.308 + set -ex
00:01:13.308 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:13.308 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.308 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.308 ++ SPDK_TEST_NVMF=1
00:01:13.308 ++ SPDK_TEST_NVME_CLI=1
00:01:13.308 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.308 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.308 ++ SPDK_TEST_VFIOUSER=1
00:01:13.308 ++ SPDK_RUN_UBSAN=1
00:01:13.308 ++ NET_TYPE=phy
00:01:13.308 ++ RUN_NIGHTLY=0
00:01:13.308 + case $SPDK_TEST_NVMF_NICS in
00:01:13.308 + DRIVERS=ice
00:01:13.308 + [[ tcp == \r\d\m\a ]]
00:01:13.308 + [[ -n ice ]]
00:01:13.308 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:13.308 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:13.308 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:13.308 rmmod: ERROR: Module irdma is not currently loaded
00:01:13.308 rmmod: ERROR: Module i40iw is not currently loaded
00:01:13.308 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:13.308 + true
00:01:13.308 + for D in $DRIVERS
00:01:13.308 + sudo modprobe ice
00:01:13.308 + exit 0
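The driver handling just above follows one pattern: unload every RDMA-capable module another test flavor may have left behind (the rmmod errors for never-loaded modules are expected and swallowed), then load only what this run needs; with SPDK_TEST_NVMF_NICS=e810 that is the ice driver. A condensed sketch of the same logic:

  #!/usr/bin/env bash
  DRIVERS=ice   # selected by the case on $SPDK_TEST_NVMF_NICS; e810 NICs use ice
  # Modules from other jobs may or may not be loaded; ignore rmmod errors
  # instead of failing the stage.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
      sudo modprobe "$D"
  done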
00:01:13.318 [Pipeline] }
00:01:13.337 [Pipeline] // withEnv
00:01:13.343 [Pipeline] }
00:01:13.358 [Pipeline] // stage
00:01:13.368 [Pipeline] catchError
00:01:13.370 [Pipeline] {
00:01:13.386 [Pipeline] timeout
00:01:13.387 Timeout set to expire in 50 min
00:01:13.388 [Pipeline] {
00:01:13.405 [Pipeline] stage
00:01:13.407 [Pipeline] { (Tests)
00:01:13.424 [Pipeline] sh
00:01:13.710 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.710 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.710 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.710 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:13.710 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.710 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.710 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:13.710 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.710 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.710 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.710 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:13.710 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.710 + source /etc/os-release
00:01:13.710 ++ NAME='Fedora Linux'
00:01:13.710 ++ VERSION='38 (Cloud Edition)'
00:01:13.710 ++ ID=fedora
00:01:13.710 ++ VERSION_ID=38
00:01:13.710 ++ VERSION_CODENAME=
00:01:13.710 ++ PLATFORM_ID=platform:f38
00:01:13.710 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:13.710 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:13.710 ++ LOGO=fedora-logo-icon
00:01:13.710 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:13.710 ++ HOME_URL=https://fedoraproject.org/
00:01:13.710 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:13.710 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:13.710 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:13.710 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:13.710 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:13.710 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:13.710 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:13.710 ++ SUPPORT_END=2024-05-14
00:01:13.710 ++ VARIANT='Cloud Edition'
00:01:13.710 ++ VARIANT_ID=cloud
00:01:13.710 + uname -a
00:01:13.710 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:13.710 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:15.621 Hugepages
00:01:15.621 node hugesize free / total
00:01:15.621 node0 1048576kB 0 / 0
00:01:15.621 node0 2048kB 0 / 0
00:01:15.621 node1 1048576kB 0 / 0
00:01:15.621 node1 2048kB 0 / 0
00:01:15.621
00:01:15.621 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:15.621 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:15.621 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:15.621 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:15.621 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:15.621 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:15.621 + rm -f /tmp/spdk-ld-path
00:01:15.621 + source autorun-spdk.conf
00:01:15.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.621 ++ SPDK_TEST_NVMF=1
00:01:15.621 ++ SPDK_TEST_NVME_CLI=1
00:01:15.621 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.621 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.621 ++ SPDK_TEST_VFIOUSER=1
00:01:15.622 ++ SPDK_RUN_UBSAN=1
00:01:15.622 ++ NET_TYPE=phy
00:01:15.622 ++ RUN_NIGHTLY=0
00:01:15.882 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.882 + [[ -n '' ]]
00:01:15.882 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.882 + for M in /var/spdk/build-*-manifest.txt
00:01:15.882 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.882 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.882 + for M in /var/spdk/build-*-manifest.txt
00:01:15.882 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.882 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.882 ++ uname
00:01:15.882 + [[ Linux == \L\i\n\u\x ]]
00:01:15.882 + sudo dmesg -T
00:01:15.882 + sudo dmesg --clear
00:01:15.882 + dmesg_pid=1763716
00:01:15.882 + [[ Fedora Linux == FreeBSD ]]
00:01:15.882 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.882 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.882 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.882 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:15.882 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:15.882 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.882 + sudo dmesg -Tw
00:01:15.882 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.882 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.882 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.882 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.882 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.882 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.882 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.882 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.882 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.882 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
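The block above applies one idiom repeatedly: probe for an optional artifact with `[[ -e ... ]]` or `[[ -x ... ]]`, and export the corresponding variable only when the probe succeeds, so later test scripts can fall back to system defaults when a staged build is absent. A small sketch of that pattern, using the paths exercised on this runner:

  #!/usr/bin/env bash
  # Only advertise the static fio build if it is actually present.
  if [[ -x /usr/src/fio-static/fio ]]; then
      export FIO_BIN=/usr/src/fio-static/fio
  fi
  # Same idea for the QEMU builds staged under /usr/local/qemu.
  if [[ -e /usr/local/qemu/vanilla-latest ]]; then
      export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
  fi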
00:01:15.882 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.882 Test configuration:
00:01:15.882 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.882 SPDK_TEST_NVMF=1
00:01:15.882 SPDK_TEST_NVME_CLI=1
00:01:15.882 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.882 SPDK_TEST_NVMF_NICS=e810
00:01:15.882 SPDK_TEST_VFIOUSER=1
00:01:15.882 SPDK_RUN_UBSAN=1
00:01:15.882 NET_TYPE=phy
00:01:15.882 RUN_NIGHTLY=0
00:01:15.882 19:37:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:15.882 19:37:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:15.882 19:37:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:15.882 19:37:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:15.882 19:37:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.882 19:37:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.882 19:37:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.882 19:37:07 -- paths/export.sh@5 -- $ export PATH
00:01:15.882 19:37:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.882 19:37:07 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:15.882 19:37:07 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:15.882 19:37:07 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842627.XXXXXX
00:01:15.882 19:37:07 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842627.B2U5Nx
00:01:15.882 19:37:07 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:15.882 19:37:07 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:15.882 19:37:07 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:15.882 19:37:07 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:15.882 19:37:07 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:15.882 19:37:07 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:15.882 19:37:07 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:15.882 19:37:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.882 19:37:07 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:15.882 19:37:07 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:15.882 19:37:07 -- pm/common@17 -- $ local monitor
00:01:15.882 19:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.882 19:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.882 19:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.882 19:37:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.882 19:37:07 -- pm/common@25 -- $ sleep 1
00:01:15.882 19:37:07 -- pm/common@21 -- $ date +%s
00:01:15.882 19:37:07 -- pm/common@21 -- $ date +%s
00:01:15.882 19:37:07 -- pm/common@21 -- $ date +%s
00:01:15.882 19:37:07 -- pm/common@21 -- $ date +%s
00:01:15.882 19:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842627
00:01:15.882 19:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842627
00:01:15.882 19:37:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842627
00:01:15.882 19:37:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842627
00:01:15.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842627_collect-vmstat.pm.log
00:01:15.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842627_collect-cpu-load.pm.log
00:01:15.882 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842627_collect-cpu-temp.pm.log
00:01:16.142 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842627_collect-bmc-pm.bmc.pm.log
00:01:17.083 19:37:08 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
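start_monitor_resources stamps the run with the epoch from `date +%s`, launches each collect-* monitor with its output redirected to a per-run .pm.log file, and the `trap stop_monitor_resources EXIT` recorded just above guarantees teardown however autobuild exits. A minimal sketch of the same launch-and-trap shape, reduced to a single monitor (the `stop` function is a hypothetical stand-in for SPDK's stop_monitor_resources):

  #!/usr/bin/env bash
  stamp=$(date +%s)                    # e.g. 1721842627, as in the log above
  out=$PWD/../output
  # Run one monitor in the background, logging to a per-run file
  # (collect-cpu-load takes -d/-l/-p as shown in the log).
  scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autobuild.sh.$stamp" &
  mon_pid=$!
  # Guarantee teardown on any exit path: success, failure, or signal.
  stop() { kill "$mon_pid" 2>/dev/null || true; }
  trap stop EXIT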
00:01:17.083 19:37:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:17.083 19:37:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:17.083 19:37:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.083 19:37:08 -- spdk/autobuild.sh@16 -- $ date -u
00:01:17.083 Wed Jul 24 05:37:08 PM UTC 2024
00:01:17.083 19:37:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:17.083 v24.09-pre-320-g3bc1795d3
00:01:17.083 19:37:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:17.083 19:37:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:17.083 19:37:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:17.083 19:37:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:17.083 19:37:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:17.083 19:37:08 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.083 ************************************
00:01:17.083 START TEST ubsan
00:01:17.083 ************************************
00:01:17.083 19:37:08 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:17.083 using ubsan
00:01:17.083
00:01:17.083 real 0m0.000s
00:01:17.083 user 0m0.000s
00:01:17.083 sys 0m0.000s
00:01:17.083 19:37:08 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:17.083 19:37:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:17.083 ************************************
00:01:17.083 END TEST ubsan
00:01:17.083 ************************************
00:01:17.083 19:37:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:17.083 19:37:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:17.083 19:37:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:17.083 19:37:08 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:17.083 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:17.083 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:17.342 Using 'verbs' RDMA provider
00:01:30.503 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:40.497 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:41.068 Creating mk/config.mk...done.
00:01:41.068 Creating mk/cc.flags.mk...done.
00:01:41.068 Type 'make' to build.
00:01:41.068 19:37:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:41.068 19:37:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:41.068 19:37:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:41.068 19:37:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.068 ************************************
00:01:41.068 START TEST make
00:01:41.068 ************************************
00:01:41.068 19:37:32 make -- common/autotest_common.sh@1125 -- $ make -j96
00:01:41.327 make[1]: Nothing to be done for 'all'.
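From here the make target hands off to Meson for the bundled libvfio-user: the summary that follows shows a debug, shared-library build configured out of tree. A hypothetical equivalent manual invocation, mirroring the options echoed in the summary below:

  # Configure and build libvfio-user out of tree, as the SPDK make target does.
  meson setup build/libvfio-user/build-debug libvfio-user \
      --buildtype debug --default-library shared --libdir /usr/local/lib
  ninja -C build/libvfio-user/build-debug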
00:01:42.346 The Meson build system
00:01:42.346 Version: 1.3.1
00:01:42.346 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:42.346 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:42.346 Build type: native build
00:01:42.346 Project name: libvfio-user
00:01:42.346 Project version: 0.0.1
00:01:42.346 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:42.346 C linker for the host machine: cc ld.bfd 2.39-16
00:01:42.346 Host machine cpu family: x86_64
00:01:42.346 Host machine cpu: x86_64
00:01:42.346 Run-time dependency threads found: YES
00:01:42.346 Library dl found: YES
00:01:42.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:42.346 Run-time dependency json-c found: YES 0.17
00:01:42.346 Run-time dependency cmocka found: YES 1.1.7
00:01:42.346 Program pytest-3 found: NO
00:01:42.346 Program flake8 found: NO
00:01:42.346 Program misspell-fixer found: NO
00:01:42.346 Program restructuredtext-lint found: NO
00:01:42.346 Program valgrind found: YES (/usr/bin/valgrind)
00:01:42.346 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.346 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.346 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.346 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.346 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:42.347 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:42.347 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:42.347 Build targets in project: 8
00:01:42.347
00:01:42.347 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:42.347 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:42.347
00:01:42.347 libvfio-user 0.0.1
00:01:42.347
00:01:42.347 User defined options
00:01:42.347 buildtype : debug
00:01:42.347 default_library: shared
00:01:42.347 libdir : /usr/local/lib
00:01:42.347
00:01:42.347 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:42.913 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:43.172 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:43.172 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:43.172 [3/37] Compiling C object samples/null.p/null.c.o
00:01:43.172 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:43.172 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:43.172 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:43.172 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:43.172 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:43.172 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:43.172 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:43.172 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:43.172 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:43.172 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:43.172 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:43.172 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:43.172 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:43.172 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:43.172 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:43.172 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:43.172 [20/37] Compiling C object samples/server.p/server.c.o
00:01:43.172 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:43.172 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:43.172 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:43.172 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:43.172 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:43.172 [26/37] Compiling C object samples/client.p/client.c.o
00:01:43.431 [27/37] Linking target samples/client
00:01:43.431 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:43.431 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:43.431 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:43.431 [31/37] Linking target test/unit_tests
00:01:43.431 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:43.431 [33/37] Linking target samples/null
00:01:43.431 [34/37] Linking target samples/server
00:01:43.431 [35/37] Linking target samples/gpio-pci-idio-16
00:01:43.431 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:43.431 [37/37] Linking target samples/lspci
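The install that follows stages the freshly built library into the SPDK tree rather than into the real prefix: setting DESTDIR makes `meson install` prepend that directory to every destination path. A generic sketch of the same technique (the staging root here is hypothetical):

  # Stage the install under a scratch root instead of writing to the prefix.
  DESTDIR=/tmp/stage meson install --quiet -C build-debug
  # Files land under /tmp/stage/<prefix>/..., e.g. /tmp/stage/usr/local/lib/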
00:01:43.431 INFO: autodetecting backend as ninja
00:01:43.431 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:43.431 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:43.999 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:43.999 ninja: no work to do.
00:01:49.276 The Meson build system
00:01:49.276 Version: 1.3.1
00:01:49.276 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:49.276 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:49.276 Build type: native build
00:01:49.276 Program cat found: YES (/usr/bin/cat)
00:01:49.276 Project name: DPDK
00:01:49.276 Project version: 24.03.0
00:01:49.276 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:49.276 C linker for the host machine: cc ld.bfd 2.39-16
00:01:49.276 Host machine cpu family: x86_64
00:01:49.276 Host machine cpu: x86_64
00:01:49.276 Message: ## Building in Developer Mode ##
00:01:49.276 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:49.276 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:49.276 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:49.276 Program python3 found: YES (/usr/bin/python3)
00:01:49.276 Program cat found: YES (/usr/bin/cat)
00:01:49.276 Compiler for C supports arguments -march=native: YES
00:01:49.276 Checking for size of "void *" : 8
00:01:49.276 Checking for size of "void *" : 8 (cached)
00:01:49.276 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:49.276 Library m found: YES
00:01:49.276 Library numa found: YES
00:01:49.276 Has header "numaif.h" : YES
00:01:49.276 Library fdt found: NO
00:01:49.276 Library execinfo found: NO
00:01:49.276 Has header "execinfo.h" : YES
00:01:49.276 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:49.276 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:49.276 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:49.276 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:49.276 Run-time dependency openssl found: YES 3.0.9
00:01:49.276 Run-time dependency libpcap found: YES 1.10.4
00:01:49.276 Has header "pcap.h" with dependency libpcap: YES
00:01:49.276 Compiler for C supports arguments -Wcast-qual: YES
00:01:49.276 Compiler for C supports arguments -Wdeprecated: YES
00:01:49.276 Compiler for C supports arguments -Wformat: YES
00:01:49.276 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:49.276 Compiler for C supports arguments -Wformat-security: NO
00:01:49.276 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:49.276 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:49.276 Compiler for C supports arguments -Wnested-externs: YES
00:01:49.276 Compiler for C supports arguments -Wold-style-definition: YES
00:01:49.276 Compiler for C supports arguments -Wpointer-arith: YES
00:01:49.276 Compiler for C supports arguments -Wsign-compare: YES
00:01:49.276 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:49.276 Compiler for C supports arguments -Wundef: YES
00:01:49.276 Compiler for C supports arguments -Wwrite-strings: YES
00:01:49.276 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:49.276 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:49.276 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:49.276 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:49.276 Program objdump found: YES (/usr/bin/objdump)
00:01:49.276 Compiler for C supports arguments -mavx512f: YES
00:01:49.276 Checking if "AVX512 checking" compiles: YES
00:01:49.276 Fetching value of define "__SSE4_2__" : 1
00:01:49.276 Fetching value of define "__AES__" : 1
00:01:49.276 Fetching value of define "__AVX__" : 1
00:01:49.276 Fetching value of define "__AVX2__" : 1
00:01:49.276 Fetching value of define "__AVX512BW__" : 1
00:01:49.276 Fetching value of define "__AVX512CD__" : 1
00:01:49.276 Fetching value of define "__AVX512DQ__" : 1
00:01:49.276 Fetching value of define "__AVX512F__" : 1
00:01:49.276 Fetching value of define "__AVX512VL__" : 1
00:01:49.276 Fetching value of define "__PCLMUL__" : 1
00:01:49.276 Fetching value of define "__RDRND__" : 1
00:01:49.276 Fetching value of define "__RDSEED__" : 1
00:01:49.276 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:49.276 Fetching value of define "__znver1__" : (undefined)
00:01:49.276 Fetching value of define "__znver2__" : (undefined)
00:01:49.276 Fetching value of define "__znver3__" : (undefined)
00:01:49.276 Fetching value of define "__znver4__" : (undefined)
00:01:49.276 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:49.276 Message: lib/log: Defining dependency "log"
00:01:49.276 Message: lib/kvargs: Defining dependency "kvargs"
00:01:49.276 Message: lib/telemetry: Defining dependency "telemetry"
00:01:49.276 Checking for function "getentropy" : NO
00:01:49.276 Message: lib/eal: Defining dependency "eal"
00:01:49.276 Message: lib/ring: Defining dependency "ring"
00:01:49.276 Message: lib/rcu: Defining dependency "rcu"
00:01:49.276 Message: lib/mempool: Defining dependency "mempool"
00:01:49.276 Message: lib/mbuf: Defining dependency "mbuf"
00:01:49.276 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:49.276 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:49.276 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:49.276 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:49.276 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:49.276 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:49.276 Compiler for C supports arguments -mpclmul: YES
00:01:49.276 Compiler for C supports arguments -maes: YES
00:01:49.276 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:49.276 Compiler for C supports arguments -mavx512bw: YES
00:01:49.276 Compiler for C supports arguments -mavx512dq: YES
00:01:49.276 Compiler for C supports arguments -mavx512vl: YES
00:01:49.276 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:49.276 Compiler for C supports arguments -mavx2: YES
00:01:49.276 Compiler for C supports arguments -mavx: YES
00:01:49.277 Message: lib/net: Defining dependency "net"
00:01:49.277 Message: lib/meter: Defining dependency "meter"
00:01:49.277 Message: lib/ethdev: Defining dependency "ethdev"
00:01:49.277 Message: lib/pci: Defining dependency "pci"
00:01:49.277 Message: lib/cmdline: Defining dependency "cmdline"
00:01:49.277 Message: lib/hash: Defining dependency "hash"
00:01:49.277 Message: lib/timer: Defining dependency "timer"
00:01:49.277 Message: lib/compressdev: Defining dependency "compressdev"
00:01:49.277 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:49.277 Message: lib/dmadev: Defining dependency "dmadev"
00:01:49.277 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:49.277 Message: lib/power: Defining dependency "power"
00:01:49.277 Message: lib/reorder: Defining dependency "reorder"
00:01:49.277 Message: lib/security: Defining dependency "security"
00:01:49.277 Has header "linux/userfaultfd.h" : YES
00:01:49.277 Has header "linux/vduse.h" : YES
00:01:49.277 Message: lib/vhost: Defining dependency "vhost"
00:01:49.277 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:49.277 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:49.277 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:49.277 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:49.277 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:49.277 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:49.277 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:49.277 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:49.277 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:49.277 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:49.277 Program doxygen found: YES (/usr/bin/doxygen)
00:01:49.277 Configuring doxy-api-html.conf using configuration
00:01:49.277 Configuring doxy-api-man.conf using configuration
00:01:49.277 Program mandb found: YES (/usr/bin/mandb)
00:01:49.277 Program sphinx-build found: NO
00:01:49.277 Configuring rte_build_config.h using configuration
00:01:49.277 Message:
00:01:49.277 =================
00:01:49.277 Applications Enabled
00:01:49.277 =================
00:01:49.277
00:01:49.277 apps:
00:01:49.277
00:01:49.277
00:01:49.277 Message:
00:01:49.277 =================
00:01:49.277 Libraries Enabled
00:01:49.277 =================
00:01:49.277
00:01:49.277 libs:
00:01:49.277 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:49.277 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:49.277 cryptodev, dmadev, power, reorder, security, vhost,
00:01:49.277
00:01:49.277 Message:
00:01:49.277 ===============
00:01:49.277 Drivers Enabled
00:01:49.277 ===============
00:01:49.277
00:01:49.277 common:
00:01:49.277
00:01:49.277 bus:
00:01:49.277 pci, vdev,
00:01:49.277 mempool:
00:01:49.277 ring,
00:01:49.277 dma:
00:01:49.277
00:01:49.277 net:
00:01:49.277
00:01:49.277 crypto:
00:01:49.277
00:01:49.277 compress:
00:01:49.277
00:01:49.277 vdpa:
00:01:49.277
00:01:49.277
00:01:49.277 Message:
00:01:49.277 =================
00:01:49.277 Content Skipped
00:01:49.277 =================
00:01:49.277
00:01:49.277 apps:
00:01:49.277 dumpcap: explicitly disabled via build config
00:01:49.277 graph: explicitly disabled via build config
00:01:49.277 pdump: explicitly disabled via build config
00:01:49.277 proc-info: explicitly disabled via build config
00:01:49.277 test-acl: explicitly disabled via build config
00:01:49.277 test-bbdev: explicitly disabled via build config
00:01:49.277 test-cmdline: explicitly disabled via build config
00:01:49.277 test-compress-perf: explicitly disabled via build config
00:01:49.277 test-crypto-perf: explicitly disabled via build config
00:01:49.277 test-dma-perf: explicitly disabled via build config
00:01:49.277 test-eventdev: explicitly disabled via build config
00:01:49.277 test-fib: explicitly disabled via build config
00:01:49.277 test-flow-perf: explicitly disabled via build config
00:01:49.277 test-gpudev: explicitly disabled via build config
00:01:49.277 test-mldev: explicitly disabled via build config
00:01:49.277 test-pipeline: explicitly disabled via build config
00:01:49.277 test-pmd: explicitly disabled via build config
00:01:49.277 test-regex: explicitly disabled via build config
00:01:49.277 test-sad: explicitly disabled via build config
00:01:49.277 test-security-perf: explicitly disabled via build config
00:01:49.277
00:01:49.277 libs:
00:01:49.277 argparse: explicitly disabled via build config
00:01:49.277 metrics: explicitly disabled via build config
00:01:49.277 acl: explicitly disabled via build config
00:01:49.277 bbdev: explicitly disabled via build config
00:01:49.277 bitratestats: explicitly disabled via build config
00:01:49.277 bpf: explicitly disabled via build config
00:01:49.277 cfgfile: explicitly disabled via build config
00:01:49.277 distributor: explicitly disabled via build config
00:01:49.277 efd: explicitly disabled via build config
00:01:49.277 eventdev: explicitly disabled via build config
00:01:49.277 dispatcher: explicitly disabled via build config
00:01:49.277 gpudev: explicitly disabled via build config
00:01:49.277 gro: explicitly disabled via build config
00:01:49.277 gso: explicitly disabled via build config
00:01:49.277 ip_frag: explicitly disabled via build config
00:01:49.277 jobstats: explicitly disabled via build config
00:01:49.277 latencystats: explicitly disabled via build config
00:01:49.277 lpm: explicitly disabled via build config
00:01:49.277 member: explicitly disabled via build config
00:01:49.277 pcapng: explicitly disabled via build config
00:01:49.277 rawdev: explicitly disabled via build config
00:01:49.277 regexdev: explicitly disabled via build config
00:01:49.277 mldev: explicitly disabled via build config
00:01:49.277 rib: explicitly disabled via build config
00:01:49.277 sched: explicitly disabled via build config
00:01:49.277 stack: explicitly disabled via build config
00:01:49.277 ipsec: explicitly disabled via build config
00:01:49.277 pdcp: explicitly disabled via build config
00:01:49.277 fib: explicitly disabled via build config
00:01:49.277 port: explicitly disabled via build config
00:01:49.277 pdump: explicitly disabled via build config
00:01:49.277 table: explicitly disabled via build config
00:01:49.277 pipeline: explicitly disabled via build config
00:01:49.277 graph: explicitly disabled via build config
00:01:49.277 node: explicitly disabled via build config
00:01:49.277
00:01:49.277 drivers:
00:01:49.277 common/cpt: not in enabled drivers build config
00:01:49.277 common/dpaax: not in enabled drivers build config
00:01:49.277 common/iavf: not in enabled drivers build config
00:01:49.277 common/idpf: not in enabled drivers build config
00:01:49.277 common/ionic: not in enabled drivers build config
00:01:49.277 common/mvep: not in enabled drivers build config
00:01:49.277 common/octeontx: not in enabled drivers build config
00:01:49.277 bus/auxiliary: not in enabled drivers build config
00:01:49.277 bus/cdx: not in enabled drivers build config
00:01:49.277 bus/dpaa: not in enabled drivers build config
00:01:49.277 bus/fslmc: not in enabled drivers build config
00:01:49.277 bus/ifpga: not in enabled drivers build config
00:01:49.277 bus/platform: not in enabled drivers build config
00:01:49.277 bus/uacce: not in enabled drivers build config
00:01:49.277 bus/vmbus: not in enabled drivers build config
00:01:49.277 common/cnxk: not in enabled drivers build config
00:01:49.277 common/mlx5: not in enabled drivers build config
00:01:49.277 common/nfp: not in enabled drivers build config
00:01:49.277 common/nitrox: not in enabled drivers build config
00:01:49.277 common/qat: not in enabled drivers build config
00:01:49.277 common/sfc_efx: not in enabled drivers build config
00:01:49.277 mempool/bucket: not in enabled drivers build config
00:01:49.277 mempool/cnxk: not in enabled drivers build config
00:01:49.277 mempool/dpaa: not in enabled drivers build config
00:01:49.277 mempool/dpaa2: not in enabled drivers build config
00:01:49.277 mempool/octeontx: not in enabled drivers build config
00:01:49.277 mempool/stack: not in enabled drivers build config
00:01:49.277 dma/cnxk: not in enabled drivers build config
00:01:49.277 dma/dpaa: not in enabled drivers build config
00:01:49.277 dma/dpaa2: not in enabled drivers build config
00:01:49.277 dma/hisilicon: not in enabled drivers build config
00:01:49.277 dma/idxd: not in enabled drivers build config
00:01:49.277 dma/ioat: not in enabled drivers build config
00:01:49.277 dma/skeleton: not in enabled drivers build config
00:01:49.277 net/af_packet: not in enabled drivers build config
00:01:49.277 net/af_xdp: not in enabled drivers build config
00:01:49.277 net/ark: not in enabled drivers build config
00:01:49.277 net/atlantic: not in enabled drivers build config
00:01:49.277 net/avp: not in enabled drivers build config
00:01:49.277 net/axgbe: not in enabled drivers build config
00:01:49.277 net/bnx2x: not in enabled drivers build config
00:01:49.277 net/bnxt: not in enabled drivers build config
00:01:49.277 net/bonding: not in enabled drivers build config
00:01:49.277 net/cnxk: not in enabled drivers build config
00:01:49.277 net/cpfl: not in enabled drivers build config
00:01:49.277 net/cxgbe: not in enabled drivers build config
00:01:49.277 net/dpaa: not in enabled drivers build config
00:01:49.277 net/dpaa2: not in enabled drivers build config
00:01:49.277 net/e1000: not in enabled drivers build config
00:01:49.277 net/ena: not in enabled drivers build config
00:01:49.277 net/enetc: not in enabled drivers build config
00:01:49.277 net/enetfec: not in enabled drivers build config
00:01:49.277 net/enic: not in enabled drivers build config
00:01:49.277 net/failsafe: not in enabled drivers build config
00:01:49.277 net/fm10k: not in enabled drivers build config
00:01:49.277 net/gve: not in enabled drivers build config
00:01:49.277 net/hinic: not in enabled drivers build config
00:01:49.277 net/hns3: not in enabled drivers build config
00:01:49.277 net/i40e: not in enabled drivers build config
00:01:49.277 net/iavf: not in enabled drivers build config
00:01:49.277 net/ice: not in enabled drivers build config
00:01:49.277 net/idpf: not in enabled drivers build config
00:01:49.277 net/igc: not in enabled drivers build config
00:01:49.278 net/ionic: not in enabled drivers build config
00:01:49.278 net/ipn3ke: not in enabled drivers build config
00:01:49.278 net/ixgbe: not in enabled drivers build config
00:01:49.278 net/mana: not in enabled drivers build config
00:01:49.278 net/memif: not in enabled drivers build config
00:01:49.278 net/mlx4: not in enabled drivers build config
00:01:49.278 net/mlx5: not in enabled drivers build config
00:01:49.278 net/mvneta: not in enabled drivers build config
00:01:49.278 net/mvpp2: not in enabled drivers build config
00:01:49.278 net/netvsc: not in enabled drivers build config
00:01:49.278 net/nfb: not in enabled drivers build config
00:01:49.278 net/nfp: not in enabled drivers build config
00:01:49.278 net/ngbe: not in enabled drivers build config
00:01:49.278 net/null: not in enabled drivers build config
00:01:49.278 net/octeontx: not in enabled drivers build config
00:01:49.278 net/octeon_ep: not in enabled drivers build config
00:01:49.278 net/pcap: not in enabled drivers build config
00:01:49.278 net/pfe: not in enabled drivers build config
00:01:49.278 net/qede: not in enabled drivers build config
00:01:49.278 net/ring: not in enabled drivers build config
00:01:49.278 net/sfc: not in enabled drivers build config
00:01:49.278 net/softnic: not in enabled drivers build config
00:01:49.278 net/tap: not in enabled drivers build config
00:01:49.278 net/thunderx: not in enabled drivers build config
00:01:49.278 net/txgbe: not in enabled drivers build config
00:01:49.278 net/vdev_netvsc: not in enabled drivers build config
00:01:49.278 net/vhost: not in enabled drivers build config
00:01:49.278 net/virtio: not in enabled drivers build config
00:01:49.278 net/vmxnet3: not in enabled drivers build config
00:01:49.278 raw/*: missing internal dependency, "rawdev"
00:01:49.278 crypto/armv8: not in enabled drivers build config
00:01:49.278 crypto/bcmfs: not in enabled drivers build config
00:01:49.278 crypto/caam_jr: not in enabled drivers build config
00:01:49.278 crypto/ccp: not in enabled drivers build config
00:01:49.278 crypto/cnxk: not in enabled drivers build config
00:01:49.278 crypto/dpaa_sec: not in enabled drivers build config
00:01:49.278 crypto/dpaa2_sec: not in enabled drivers build config
00:01:49.278 crypto/ipsec_mb: not in enabled drivers build config
00:01:49.278 crypto/mlx5: not in enabled drivers build config
00:01:49.278 crypto/mvsam: not in enabled drivers build config
00:01:49.278 crypto/nitrox: not in enabled drivers build config
00:01:49.278 crypto/null: not in enabled drivers build config
00:01:49.278 crypto/octeontx: not in enabled drivers build config
00:01:49.278 crypto/openssl: not in enabled drivers build config
00:01:49.278 crypto/scheduler: not in enabled drivers build config
00:01:49.278 crypto/uadk: not in enabled drivers build config
00:01:49.278 crypto/virtio: not in enabled drivers build config
00:01:49.278 compress/isal: not in enabled drivers build config
00:01:49.278 compress/mlx5: not in enabled drivers build config
00:01:49.278 compress/nitrox: not in enabled drivers build config
00:01:49.278 compress/octeontx: not in enabled drivers build config
00:01:49.278 compress/zlib: not in enabled drivers build config
00:01:49.278 regex/*: missing internal dependency, "regexdev"
00:01:49.278 ml/*: missing internal dependency, "mldev"
00:01:49.278 vdpa/ifc: not in enabled drivers build config
00:01:49.278 vdpa/mlx5: not in enabled drivers build config
00:01:49.278 vdpa/nfp: not in enabled drivers build config
00:01:49.278 vdpa/sfc: not in enabled drivers build config
00:01:49.278 event/*: missing internal dependency, "eventdev"
00:01:49.278 baseband/*: missing internal dependency, "bbdev"
00:01:49.278 gpu/*: missing internal dependency, "gpudev"
00:01:49.278
00:01:49.278
00:01:49.278 Build targets in project: 85
00:01:49.278
00:01:49.278 DPDK 24.03.0
00:01:49.278
00:01:49.278 User defined options
00:01:49.278 buildtype : debug
00:01:49.278 default_library : shared
00:01:49.278 libdir : lib
00:01:49.278 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:49.278 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:49.278 c_link_args :
00:01:49.278 cpu_instruction_set: native
00:01:49.278 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:49.278 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:49.278 enable_docs : false
00:01:49.278 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:49.278 enable_kmods : false
00:01:49.278 max_lcores : 128
00:01:49.278 tests : false
00:01:49.278
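The "User defined options" block above is DPDK's Meson configuration echoed back: SPDK trims the build by passing -D options that disable every app and most libraries, enabling only the bus and mempool drivers it needs. A trimmed, hypothetical equivalent manual invocation, with the long list values abridged from the block above:

  # Configure a minimal DPDK the way SPDK's bundled build does (values abridged).
  meson setup build-tmp --buildtype debug --default-library shared \
      -Ddisable_apps=dumpcap,graph,pdump \
      -Ddisable_libs=acl,bbdev,bpf \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp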
00:01:49.278 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:49.551 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:49.551 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:49.819 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:49.819 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:49.819 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:49.819 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:49.819 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:49.819 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:49.819 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:49.819 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:49.819 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:49.819 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:49.819 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:49.819 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:49.819 [14/268] Linking static target lib/librte_kvargs.a
00:01:49.819 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:49.819 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:49.819 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:49.819 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:49.819 [19/268] Linking static target lib/librte_log.a
00:01:49.819 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:49.819 [21/268] Linking static target lib/librte_pci.a
00:01:49.819 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:50.079 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:50.079 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:50.079 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:50.079 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:50.079 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:50.079 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:50.079 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:50.079 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:50.079 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:50.079 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:50.079 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:50.079 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:50.079 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:50.079 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:50.079 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:50.079 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:50.079 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:50.079 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:50.079 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:50.079 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:50.079 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:50.079 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:50.079 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:50.079 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:50.079 [47/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:50.079 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:50.079 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:50.079 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:50.079 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:50.079 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:50.079 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:50.079 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:50.339 [55/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:50.339 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:50.339 [57/268] Linking static target lib/librte_meter.a
00:01:50.339 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:50.339 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:50.339 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:50.339 [61/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:50.339 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:50.339 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:50.339 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:50.339 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:50.339 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:50.339 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:50.339 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:50.339 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:50.339 [70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
lib/net/libnet_crc_avx512_lib.a 00:01:50.339 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.339 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.339 [74/268] Linking static target lib/librte_ring.a 00:01:50.339 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.339 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.339 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.339 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.339 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.339 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.339 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.339 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:50.339 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.339 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.339 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.339 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:50.339 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:50.339 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.339 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:50.339 [90/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:50.339 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.339 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.339 [93/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.339 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.339 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:50.339 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:50.339 [97/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.339 [98/268] Linking static target lib/librte_telemetry.a 00:01:50.339 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.339 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:50.339 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.339 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.339 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:50.339 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.339 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:50.339 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:50.339 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.339 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.339 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:50.339 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:50.339 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.339 [112/268] Linking static target lib/librte_net.a 00:01:50.339 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:50.339 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.339 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:50.339 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.339 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.339 [118/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:50.339 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:50.339 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:50.339 [121/268] Linking static target lib/librte_rcu.a 00:01:50.339 [122/268] Linking static target lib/librte_mempool.a 00:01:50.339 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:50.339 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:50.339 [125/268] Linking static target lib/librte_eal.a 00:01:50.339 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.339 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.598 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.598 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.598 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.598 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.598 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.598 [133/268] Linking static target lib/librte_cmdline.a 00:01:50.598 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.598 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:50.598 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.598 [139/268] Linking target lib/librte_log.so.24.1 00:01:50.598 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:50.598 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:50.598 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [144/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.598 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:50.598 [146/268] Linking static target lib/librte_mbuf.a 00:01:50.598 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.598 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.598 [149/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.598 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:50.598 [152/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.598 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.598 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:50.598 [155/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.598 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:50.598 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.598 [158/268] Linking target lib/librte_kvargs.so.24.1 00:01:50.598 [159/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.598 [160/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.598 [161/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.857 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:50.857 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.858 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.858 [165/268] Linking target lib/librte_telemetry.so.24.1 00:01:50.858 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.858 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.858 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.858 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.858 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.858 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.858 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.858 [173/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.858 [174/268] Linking static target lib/librte_power.a 00:01:50.858 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.858 [176/268] Linking static target lib/librte_timer.a 00:01:50.858 [177/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.858 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.858 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.858 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.858 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.858 [182/268] Linking static target lib/librte_dmadev.a 00:01:50.858 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.858 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.858 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.858 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:50.858 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.858 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.858 [189/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.858 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:50.858 [191/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.858 [192/268] Linking static target drivers/librte_bus_vdev.a 00:01:50.858 [193/268] Linking static target lib/librte_compressdev.a 00:01:50.858 [194/268] Linking static target lib/librte_reorder.a 00:01:50.858 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.858 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.858 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.858 [198/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.858 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.858 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.858 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.858 [202/268] Linking static target lib/librte_hash.a 00:01:50.858 [203/268] Linking static target lib/librte_security.a 00:01:51.118 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.118 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.118 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:51.118 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:51.118 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.118 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.118 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:51.118 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.118 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:51.118 [213/268] Linking static target lib/librte_cryptodev.a 00:01:51.118 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.376 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.376 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.376 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.376 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.376 [219/268] Linking static target lib/librte_ethdev.a 00:01:51.377 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.635 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.635 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.635 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.635 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.635 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.900 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.900 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.480 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:52.480 
[229/268] Linking static target lib/librte_vhost.a 00:01:53.049 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.426 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.703 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.270 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.270 [234/268] Linking target lib/librte_eal.so.24.1 00:02:00.270 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:00.530 [236/268] Linking target lib/librte_pci.so.24.1 00:02:00.530 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:00.530 [238/268] Linking target lib/librte_meter.so.24.1 00:02:00.530 [239/268] Linking target lib/librte_timer.so.24.1 00:02:00.530 [240/268] Linking target lib/librte_ring.so.24.1 00:02:00.530 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:00.530 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:00.530 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:00.530 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:00.530 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:00.530 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:00.530 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:00.530 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:00.530 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:00.791 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:00.791 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.791 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:00.791 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:00.791 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:01.052 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:01.052 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:01.052 [257/268] Linking target lib/librte_net.so.24.1 00:02:01.052 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:01.052 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:01.052 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:01.052 [261/268] Linking target lib/librte_hash.so.24.1 00:02:01.052 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:01.052 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:01.052 [264/268] Linking target lib/librte_security.so.24.1 00:02:01.312 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:01.312 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.312 [267/268] Linking target lib/librte_power.so.24.1 00:02:01.312 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:01.312 INFO: autodetecting backend as ninja 00:02:01.312 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:02.248 CC lib/ut/ut.o 00:02:02.248 CC lib/ut_mock/mock.o 00:02:02.248 CC lib/log/log.o 00:02:02.248 CC 
lib/log/log_deprecated.o 00:02:02.248 CC lib/log/log_flags.o 00:02:02.507 LIB libspdk_ut_mock.a 00:02:02.507 LIB libspdk_ut.a 00:02:02.507 SO libspdk_ut.so.2.0 00:02:02.507 SO libspdk_ut_mock.so.6.0 00:02:02.507 LIB libspdk_log.a 00:02:02.507 SO libspdk_log.so.7.0 00:02:02.507 SYMLINK libspdk_ut.so 00:02:02.507 SYMLINK libspdk_ut_mock.so 00:02:02.507 SYMLINK libspdk_log.so 00:02:02.816 CC lib/ioat/ioat.o 00:02:02.816 CXX lib/trace_parser/trace.o 00:02:02.816 CC lib/util/base64.o 00:02:02.816 CC lib/dma/dma.o 00:02:02.816 CC lib/util/bit_array.o 00:02:02.816 CC lib/util/cpuset.o 00:02:02.816 CC lib/util/crc16.o 00:02:02.816 CC lib/util/crc32.o 00:02:02.816 CC lib/util/crc32_ieee.o 00:02:02.816 CC lib/util/crc32c.o 00:02:02.816 CC lib/util/crc64.o 00:02:02.816 CC lib/util/dif.o 00:02:02.816 CC lib/util/fd.o 00:02:02.816 CC lib/util/fd_group.o 00:02:02.816 CC lib/util/file.o 00:02:02.816 CC lib/util/hexlify.o 00:02:02.816 CC lib/util/iov.o 00:02:02.816 CC lib/util/math.o 00:02:02.816 CC lib/util/net.o 00:02:02.816 CC lib/util/pipe.o 00:02:02.816 CC lib/util/strerror_tls.o 00:02:02.816 CC lib/util/string.o 00:02:02.816 CC lib/util/uuid.o 00:02:02.816 CC lib/util/xor.o 00:02:02.816 CC lib/util/zipf.o 00:02:03.075 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.075 CC lib/vfio_user/host/vfio_user.o 00:02:03.075 LIB libspdk_dma.a 00:02:03.075 SO libspdk_dma.so.4.0 00:02:03.075 LIB libspdk_ioat.a 00:02:03.075 SO libspdk_ioat.so.7.0 00:02:03.075 SYMLINK libspdk_dma.so 00:02:03.075 SYMLINK libspdk_ioat.so 00:02:03.334 LIB libspdk_vfio_user.a 00:02:03.334 SO libspdk_vfio_user.so.5.0 00:02:03.334 LIB libspdk_util.a 00:02:03.334 SYMLINK libspdk_vfio_user.so 00:02:03.334 SO libspdk_util.so.10.0 00:02:03.593 SYMLINK libspdk_util.so 00:02:03.593 LIB libspdk_trace_parser.a 00:02:03.593 SO libspdk_trace_parser.so.5.0 00:02:03.593 SYMLINK libspdk_trace_parser.so 00:02:03.851 CC lib/rdma_utils/rdma_utils.o 00:02:03.851 CC lib/env_dpdk/memory.o 00:02:03.851 CC lib/env_dpdk/env.o 00:02:03.851 CC lib/idxd/idxd.o 00:02:03.851 CC lib/env_dpdk/init.o 00:02:03.851 CC lib/env_dpdk/pci.o 00:02:03.851 CC lib/idxd/idxd_user.o 00:02:03.851 CC lib/env_dpdk/threads.o 00:02:03.851 CC lib/idxd/idxd_kernel.o 00:02:03.851 CC lib/env_dpdk/pci_ioat.o 00:02:03.851 CC lib/env_dpdk/pci_virtio.o 00:02:03.851 CC lib/env_dpdk/pci_vmd.o 00:02:03.851 CC lib/env_dpdk/pci_idxd.o 00:02:03.851 CC lib/env_dpdk/pci_event.o 00:02:03.851 CC lib/env_dpdk/sigbus_handler.o 00:02:03.851 CC lib/env_dpdk/pci_dpdk.o 00:02:03.851 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.851 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:03.851 CC lib/conf/conf.o 00:02:03.851 CC lib/json/json_parse.o 00:02:03.851 CC lib/json/json_util.o 00:02:03.851 CC lib/json/json_write.o 00:02:03.851 CC lib/rdma_provider/common.o 00:02:03.851 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:03.851 CC lib/vmd/led.o 00:02:03.851 CC lib/vmd/vmd.o 00:02:04.111 LIB libspdk_rdma_provider.a 00:02:04.111 LIB libspdk_rdma_utils.a 00:02:04.111 LIB libspdk_conf.a 00:02:04.111 SO libspdk_rdma_provider.so.6.0 00:02:04.111 SO libspdk_rdma_utils.so.1.0 00:02:04.111 SO libspdk_conf.so.6.0 00:02:04.111 LIB libspdk_json.a 00:02:04.111 SYMLINK libspdk_rdma_utils.so 00:02:04.111 SYMLINK libspdk_rdma_provider.so 00:02:04.111 SYMLINK libspdk_conf.so 00:02:04.111 SO libspdk_json.so.6.0 00:02:04.111 SYMLINK libspdk_json.so 00:02:04.370 LIB libspdk_idxd.a 00:02:04.370 SO libspdk_idxd.so.12.0 00:02:04.370 LIB libspdk_vmd.a 00:02:04.370 SYMLINK libspdk_idxd.so 00:02:04.370 SO libspdk_vmd.so.6.0 00:02:04.370 SYMLINK 
libspdk_vmd.so 00:02:04.370 CC lib/jsonrpc/jsonrpc_server.o 00:02:04.370 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.370 CC lib/jsonrpc/jsonrpc_client.o 00:02:04.370 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:04.629 LIB libspdk_jsonrpc.a 00:02:04.629 SO libspdk_jsonrpc.so.6.0 00:02:04.889 SYMLINK libspdk_jsonrpc.so 00:02:04.889 LIB libspdk_env_dpdk.a 00:02:04.889 SO libspdk_env_dpdk.so.15.0 00:02:04.889 SYMLINK libspdk_env_dpdk.so 00:02:05.148 CC lib/rpc/rpc.o 00:02:05.408 LIB libspdk_rpc.a 00:02:05.408 SO libspdk_rpc.so.6.0 00:02:05.408 SYMLINK libspdk_rpc.so 00:02:05.666 CC lib/keyring/keyring_rpc.o 00:02:05.666 CC lib/keyring/keyring.o 00:02:05.666 CC lib/trace/trace.o 00:02:05.666 CC lib/trace/trace_flags.o 00:02:05.666 CC lib/trace/trace_rpc.o 00:02:05.666 CC lib/notify/notify.o 00:02:05.666 CC lib/notify/notify_rpc.o 00:02:05.925 LIB libspdk_keyring.a 00:02:05.925 LIB libspdk_notify.a 00:02:05.925 SO libspdk_keyring.so.1.0 00:02:05.925 SO libspdk_notify.so.6.0 00:02:05.925 LIB libspdk_trace.a 00:02:05.925 SYMLINK libspdk_keyring.so 00:02:05.925 SO libspdk_trace.so.10.0 00:02:05.925 SYMLINK libspdk_notify.so 00:02:05.925 SYMLINK libspdk_trace.so 00:02:06.183 CC lib/sock/sock.o 00:02:06.183 CC lib/sock/sock_rpc.o 00:02:06.183 CC lib/thread/thread.o 00:02:06.183 CC lib/thread/iobuf.o 00:02:06.751 LIB libspdk_sock.a 00:02:06.751 SO libspdk_sock.so.10.0 00:02:06.751 SYMLINK libspdk_sock.so 00:02:07.010 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.010 CC lib/nvme/nvme_ctrlr.o 00:02:07.010 CC lib/nvme/nvme_fabric.o 00:02:07.010 CC lib/nvme/nvme_ns_cmd.o 00:02:07.010 CC lib/nvme/nvme_ns.o 00:02:07.010 CC lib/nvme/nvme_pcie_common.o 00:02:07.010 CC lib/nvme/nvme_pcie.o 00:02:07.010 CC lib/nvme/nvme_qpair.o 00:02:07.010 CC lib/nvme/nvme.o 00:02:07.010 CC lib/nvme/nvme_quirks.o 00:02:07.010 CC lib/nvme/nvme_transport.o 00:02:07.010 CC lib/nvme/nvme_discovery.o 00:02:07.010 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.010 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.010 CC lib/nvme/nvme_opal.o 00:02:07.010 CC lib/nvme/nvme_tcp.o 00:02:07.010 CC lib/nvme/nvme_io_msg.o 00:02:07.010 CC lib/nvme/nvme_poll_group.o 00:02:07.010 CC lib/nvme/nvme_zns.o 00:02:07.010 CC lib/nvme/nvme_stubs.o 00:02:07.010 CC lib/nvme/nvme_auth.o 00:02:07.010 CC lib/nvme/nvme_cuse.o 00:02:07.010 CC lib/nvme/nvme_vfio_user.o 00:02:07.010 CC lib/nvme/nvme_rdma.o 00:02:07.268 LIB libspdk_thread.a 00:02:07.268 SO libspdk_thread.so.10.1 00:02:07.527 SYMLINK libspdk_thread.so 00:02:07.786 CC lib/init/subsystem.o 00:02:07.786 CC lib/init/json_config.o 00:02:07.786 CC lib/init/rpc.o 00:02:07.786 CC lib/init/subsystem_rpc.o 00:02:07.786 CC lib/accel/accel.o 00:02:07.786 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.786 CC lib/accel/accel_rpc.o 00:02:07.786 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.786 CC lib/accel/accel_sw.o 00:02:07.786 CC lib/virtio/virtio_vfio_user.o 00:02:07.786 CC lib/virtio/virtio.o 00:02:07.786 CC lib/virtio/virtio_vhost_user.o 00:02:07.786 CC lib/virtio/virtio_pci.o 00:02:07.786 CC lib/blob/blobstore.o 00:02:07.786 CC lib/blob/request.o 00:02:07.786 CC lib/blob/zeroes.o 00:02:07.786 CC lib/blob/blob_bs_dev.o 00:02:07.786 LIB libspdk_init.a 00:02:08.045 SO libspdk_init.so.5.0 00:02:08.045 LIB libspdk_vfu_tgt.a 00:02:08.045 LIB libspdk_virtio.a 00:02:08.045 SO libspdk_vfu_tgt.so.3.0 00:02:08.045 SYMLINK libspdk_init.so 00:02:08.045 SO libspdk_virtio.so.7.0 00:02:08.045 SYMLINK libspdk_vfu_tgt.so 00:02:08.045 SYMLINK libspdk_virtio.so 00:02:08.480 CC lib/event/app.o 00:02:08.480 CC lib/event/app_rpc.o 00:02:08.480 CC lib/event/reactor.o 
00:02:08.480 CC lib/event/log_rpc.o 00:02:08.480 CC lib/event/scheduler_static.o 00:02:08.480 LIB libspdk_accel.a 00:02:08.480 SO libspdk_accel.so.16.0 00:02:08.480 SYMLINK libspdk_accel.so 00:02:08.480 LIB libspdk_nvme.a 00:02:08.480 LIB libspdk_event.a 00:02:08.737 SO libspdk_event.so.14.0 00:02:08.737 SO libspdk_nvme.so.13.1 00:02:08.737 SYMLINK libspdk_event.so 00:02:08.737 CC lib/bdev/bdev.o 00:02:08.737 CC lib/bdev/part.o 00:02:08.737 CC lib/bdev/bdev_rpc.o 00:02:08.737 CC lib/bdev/bdev_zone.o 00:02:08.737 CC lib/bdev/scsi_nvme.o 00:02:08.995 SYMLINK libspdk_nvme.so 00:02:09.929 LIB libspdk_blob.a 00:02:09.929 SO libspdk_blob.so.11.0 00:02:09.929 SYMLINK libspdk_blob.so 00:02:10.188 CC lib/lvol/lvol.o 00:02:10.188 CC lib/blobfs/blobfs.o 00:02:10.188 CC lib/blobfs/tree.o 00:02:10.446 LIB libspdk_bdev.a 00:02:10.704 SO libspdk_bdev.so.16.0 00:02:10.704 SYMLINK libspdk_bdev.so 00:02:10.704 LIB libspdk_blobfs.a 00:02:10.704 SO libspdk_blobfs.so.10.0 00:02:10.704 LIB libspdk_lvol.a 00:02:10.962 SO libspdk_lvol.so.10.0 00:02:10.962 SYMLINK libspdk_blobfs.so 00:02:10.962 SYMLINK libspdk_lvol.so 00:02:10.962 CC lib/scsi/lun.o 00:02:10.962 CC lib/scsi/port.o 00:02:10.962 CC lib/scsi/dev.o 00:02:10.962 CC lib/nbd/nbd.o 00:02:10.962 CC lib/nbd/nbd_rpc.o 00:02:10.962 CC lib/scsi/scsi.o 00:02:10.962 CC lib/scsi/scsi_bdev.o 00:02:10.962 CC lib/scsi/scsi_pr.o 00:02:10.962 CC lib/scsi/scsi_rpc.o 00:02:10.962 CC lib/scsi/task.o 00:02:10.962 CC lib/ublk/ublk_rpc.o 00:02:10.962 CC lib/ublk/ublk.o 00:02:10.962 CC lib/ftl/ftl_core.o 00:02:10.962 CC lib/ftl/ftl_init.o 00:02:10.962 CC lib/ftl/ftl_layout.o 00:02:10.962 CC lib/ftl/ftl_debug.o 00:02:10.962 CC lib/ftl/ftl_io.o 00:02:10.962 CC lib/ftl/ftl_l2p_flat.o 00:02:10.962 CC lib/ftl/ftl_sb.o 00:02:10.962 CC lib/ftl/ftl_l2p.o 00:02:10.962 CC lib/ftl/ftl_nv_cache.o 00:02:10.962 CC lib/nvmf/ctrlr.o 00:02:10.962 CC lib/ftl/ftl_band.o 00:02:10.962 CC lib/ftl/ftl_band_ops.o 00:02:10.962 CC lib/nvmf/ctrlr_discovery.o 00:02:10.962 CC lib/nvmf/ctrlr_bdev.o 00:02:10.962 CC lib/ftl/ftl_writer.o 00:02:10.962 CC lib/nvmf/subsystem.o 00:02:10.962 CC lib/ftl/ftl_rq.o 00:02:10.962 CC lib/ftl/ftl_reloc.o 00:02:10.962 CC lib/nvmf/nvmf.o 00:02:10.962 CC lib/nvmf/tcp.o 00:02:10.962 CC lib/nvmf/nvmf_rpc.o 00:02:10.962 CC lib/ftl/ftl_l2p_cache.o 00:02:10.962 CC lib/ftl/ftl_p2l.o 00:02:10.962 CC lib/nvmf/transport.o 00:02:10.962 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.962 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.962 CC lib/nvmf/stubs.o 00:02:10.962 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.962 CC lib/nvmf/mdns_server.o 00:02:10.962 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.963 CC lib/nvmf/vfio_user.o 00:02:10.963 CC lib/nvmf/rdma.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.963 CC lib/nvmf/auth.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.963 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.963 CC lib/ftl/utils/ftl_conf.o 00:02:10.963 CC lib/ftl/utils/ftl_mempool.o 00:02:10.963 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.963 CC lib/ftl/utils/ftl_property.o 00:02:10.963 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.963 CC lib/ftl/utils/ftl_md.o 00:02:10.963 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.963 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.963 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.963 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.963 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.963 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.963 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.963 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.963 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.963 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.963 CC lib/ftl/base/ftl_base_dev.o 00:02:10.963 CC lib/ftl/base/ftl_base_bdev.o 00:02:10.963 CC lib/ftl/ftl_trace.o 00:02:11.530 LIB libspdk_scsi.a 00:02:11.530 LIB libspdk_nbd.a 00:02:11.530 SO libspdk_scsi.so.9.0 00:02:11.530 SO libspdk_nbd.so.7.0 00:02:11.530 SYMLINK libspdk_nbd.so 00:02:11.530 SYMLINK libspdk_scsi.so 00:02:11.789 LIB libspdk_ublk.a 00:02:11.789 SO libspdk_ublk.so.3.0 00:02:11.789 SYMLINK libspdk_ublk.so 00:02:11.789 CC lib/vhost/vhost.o 00:02:11.789 CC lib/vhost/vhost_scsi.o 00:02:11.789 CC lib/vhost/vhost_rpc.o 00:02:11.789 CC lib/vhost/vhost_blk.o 00:02:11.789 CC lib/vhost/rte_vhost_user.o 00:02:11.789 CC lib/iscsi/conn.o 00:02:11.789 CC lib/iscsi/iscsi.o 00:02:11.789 CC lib/iscsi/init_grp.o 00:02:11.789 CC lib/iscsi/md5.o 00:02:11.789 CC lib/iscsi/param.o 00:02:11.789 CC lib/iscsi/tgt_node.o 00:02:11.789 LIB libspdk_ftl.a 00:02:11.789 CC lib/iscsi/portal_grp.o 00:02:11.789 CC lib/iscsi/iscsi_subsystem.o 00:02:11.789 CC lib/iscsi/iscsi_rpc.o 00:02:11.789 CC lib/iscsi/task.o 00:02:12.046 SO libspdk_ftl.so.9.0 00:02:12.304 SYMLINK libspdk_ftl.so 00:02:12.563 LIB libspdk_vhost.a 00:02:12.823 SO libspdk_vhost.so.8.0 00:02:12.823 LIB libspdk_nvmf.a 00:02:12.823 SYMLINK libspdk_vhost.so 00:02:12.823 SO libspdk_nvmf.so.19.0 00:02:12.823 LIB libspdk_iscsi.a 00:02:12.823 SO libspdk_iscsi.so.8.0 00:02:13.082 SYMLINK libspdk_nvmf.so 00:02:13.082 SYMLINK libspdk_iscsi.so 00:02:13.652 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.652 CC module/vfu_device/vfu_virtio.o 00:02:13.652 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.652 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.652 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.652 LIB libspdk_env_dpdk_rpc.a 00:02:13.652 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.652 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.652 CC module/keyring/file/keyring.o 00:02:13.652 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.652 CC module/keyring/file/keyring_rpc.o 00:02:13.652 CC module/accel/iaa/accel_iaa.o 00:02:13.652 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.652 CC module/blob/bdev/blob_bdev.o 00:02:13.652 CC module/accel/error/accel_error.o 00:02:13.652 CC module/accel/error/accel_error_rpc.o 00:02:13.652 CC module/accel/ioat/accel_ioat.o 00:02:13.652 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.652 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.652 CC module/sock/posix/posix.o 00:02:13.652 CC module/accel/dsa/accel_dsa.o 00:02:13.652 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.652 CC module/keyring/linux/keyring_rpc.o 00:02:13.652 CC module/keyring/linux/keyring.o 00:02:13.652 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.911 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.911 LIB libspdk_keyring_file.a 00:02:13.911 LIB libspdk_scheduler_gscheduler.a 00:02:13.911 LIB libspdk_scheduler_dynamic.a 00:02:13.911 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.911 LIB libspdk_keyring_linux.a 00:02:13.911 SO libspdk_keyring_file.so.1.0 00:02:13.911 LIB libspdk_accel_ioat.a 00:02:13.911 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.911 LIB libspdk_accel_error.a 00:02:13.911 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.911 SO libspdk_keyring_linux.so.1.0 
00:02:13.911 LIB libspdk_accel_iaa.a 00:02:13.911 SO libspdk_accel_ioat.so.6.0 00:02:13.911 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.911 SO libspdk_accel_error.so.2.0 00:02:13.911 LIB libspdk_blob_bdev.a 00:02:13.911 SYMLINK libspdk_keyring_file.so 00:02:13.911 SO libspdk_accel_iaa.so.3.0 00:02:13.911 LIB libspdk_accel_dsa.a 00:02:13.911 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.911 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.911 SYMLINK libspdk_keyring_linux.so 00:02:13.911 SO libspdk_blob_bdev.so.11.0 00:02:13.911 SYMLINK libspdk_accel_ioat.so 00:02:13.911 SO libspdk_accel_dsa.so.5.0 00:02:13.911 SYMLINK libspdk_accel_error.so 00:02:13.911 SYMLINK libspdk_accel_iaa.so 00:02:13.911 SYMLINK libspdk_blob_bdev.so 00:02:13.911 LIB libspdk_vfu_device.a 00:02:13.911 SYMLINK libspdk_accel_dsa.so 00:02:13.911 SO libspdk_vfu_device.so.3.0 00:02:14.171 SYMLINK libspdk_vfu_device.so 00:02:14.171 LIB libspdk_sock_posix.a 00:02:14.171 SO libspdk_sock_posix.so.6.0 00:02:14.431 SYMLINK libspdk_sock_posix.so 00:02:14.431 CC module/bdev/delay/vbdev_delay.o 00:02:14.431 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.431 CC module/bdev/error/vbdev_error.o 00:02:14.431 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.431 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.431 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.431 CC module/bdev/gpt/gpt.o 00:02:14.431 CC module/bdev/nvme/bdev_nvme.o 00:02:14.431 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.431 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.431 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.431 CC module/bdev/nvme/nvme_rpc.o 00:02:14.431 CC module/bdev/nvme/vbdev_opal.o 00:02:14.431 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.431 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.431 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.431 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.431 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.431 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.431 CC module/bdev/null/bdev_null.o 00:02:14.431 CC module/bdev/null/bdev_null_rpc.o 00:02:14.431 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.431 CC module/bdev/raid/bdev_raid.o 00:02:14.431 CC module/bdev/raid/raid0.o 00:02:14.431 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.431 CC module/bdev/raid/raid1.o 00:02:14.431 CC module/bdev/split/vbdev_split.o 00:02:14.431 CC module/bdev/raid/concat.o 00:02:14.431 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.431 CC module/bdev/aio/bdev_aio.o 00:02:14.431 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.431 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.431 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.431 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.431 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.431 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.431 CC module/bdev/ftl/bdev_ftl.o 00:02:14.431 CC module/bdev/malloc/bdev_malloc.o 00:02:14.431 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.431 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.431 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.690 LIB libspdk_blobfs_bdev.a 00:02:14.690 LIB libspdk_bdev_error.a 00:02:14.690 SO libspdk_blobfs_bdev.so.6.0 00:02:14.690 SO libspdk_bdev_error.so.6.0 00:02:14.690 LIB libspdk_bdev_split.a 00:02:14.690 SYMLINK libspdk_blobfs_bdev.so 00:02:14.690 LIB libspdk_bdev_null.a 00:02:14.690 LIB libspdk_bdev_ftl.a 00:02:14.690 LIB libspdk_bdev_gpt.a 00:02:14.690 SO libspdk_bdev_split.so.6.0 00:02:14.690 SYMLINK libspdk_bdev_error.so 00:02:14.690 LIB 
libspdk_bdev_passthru.a 00:02:14.691 SO libspdk_bdev_null.so.6.0 00:02:14.691 LIB libspdk_bdev_delay.a 00:02:14.691 SO libspdk_bdev_ftl.so.6.0 00:02:14.691 LIB libspdk_bdev_aio.a 00:02:14.691 SO libspdk_bdev_gpt.so.6.0 00:02:14.691 LIB libspdk_bdev_zone_block.a 00:02:14.691 SO libspdk_bdev_passthru.so.6.0 00:02:14.950 SO libspdk_bdev_delay.so.6.0 00:02:14.950 LIB libspdk_bdev_iscsi.a 00:02:14.950 LIB libspdk_bdev_malloc.a 00:02:14.950 SO libspdk_bdev_aio.so.6.0 00:02:14.950 SYMLINK libspdk_bdev_split.so 00:02:14.950 SYMLINK libspdk_bdev_null.so 00:02:14.950 SO libspdk_bdev_zone_block.so.6.0 00:02:14.950 SYMLINK libspdk_bdev_gpt.so 00:02:14.950 SYMLINK libspdk_bdev_ftl.so 00:02:14.950 SO libspdk_bdev_iscsi.so.6.0 00:02:14.950 SO libspdk_bdev_malloc.so.6.0 00:02:14.950 SYMLINK libspdk_bdev_passthru.so 00:02:14.950 SYMLINK libspdk_bdev_delay.so 00:02:14.950 SYMLINK libspdk_bdev_aio.so 00:02:14.950 LIB libspdk_bdev_lvol.a 00:02:14.950 SYMLINK libspdk_bdev_zone_block.so 00:02:14.950 LIB libspdk_bdev_virtio.a 00:02:14.950 SYMLINK libspdk_bdev_iscsi.so 00:02:14.950 SYMLINK libspdk_bdev_malloc.so 00:02:14.950 SO libspdk_bdev_lvol.so.6.0 00:02:14.950 SO libspdk_bdev_virtio.so.6.0 00:02:14.950 SYMLINK libspdk_bdev_lvol.so 00:02:14.950 SYMLINK libspdk_bdev_virtio.so 00:02:15.210 LIB libspdk_bdev_raid.a 00:02:15.210 SO libspdk_bdev_raid.so.6.0 00:02:15.469 SYMLINK libspdk_bdev_raid.so 00:02:16.038 LIB libspdk_bdev_nvme.a 00:02:16.038 SO libspdk_bdev_nvme.so.7.0 00:02:16.298 SYMLINK libspdk_bdev_nvme.so 00:02:16.899 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:16.899 CC module/event/subsystems/sock/sock.o 00:02:16.899 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.899 CC module/event/subsystems/keyring/keyring.o 00:02:16.899 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.899 CC module/event/subsystems/vmd/vmd.o 00:02:16.899 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.899 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.899 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.899 LIB libspdk_event_vfu_tgt.a 00:02:16.899 LIB libspdk_event_vhost_blk.a 00:02:16.899 LIB libspdk_event_keyring.a 00:02:16.899 LIB libspdk_event_sock.a 00:02:16.899 SO libspdk_event_vfu_tgt.so.3.0 00:02:16.899 SO libspdk_event_vhost_blk.so.3.0 00:02:16.899 LIB libspdk_event_vmd.a 00:02:16.899 SO libspdk_event_keyring.so.1.0 00:02:16.899 SO libspdk_event_sock.so.5.0 00:02:16.899 LIB libspdk_event_scheduler.a 00:02:16.899 LIB libspdk_event_iobuf.a 00:02:16.899 SO libspdk_event_iobuf.so.3.0 00:02:16.899 SYMLINK libspdk_event_vfu_tgt.so 00:02:16.899 SO libspdk_event_vmd.so.6.0 00:02:16.899 SYMLINK libspdk_event_vhost_blk.so 00:02:16.899 SO libspdk_event_scheduler.so.4.0 00:02:16.899 SYMLINK libspdk_event_keyring.so 00:02:17.196 SYMLINK libspdk_event_sock.so 00:02:17.196 SYMLINK libspdk_event_iobuf.so 00:02:17.196 SYMLINK libspdk_event_scheduler.so 00:02:17.196 SYMLINK libspdk_event_vmd.so 00:02:17.196 CC module/event/subsystems/accel/accel.o 00:02:17.456 LIB libspdk_event_accel.a 00:02:17.456 SO libspdk_event_accel.so.6.0 00:02:17.456 SYMLINK libspdk_event_accel.so 00:02:17.717 CC module/event/subsystems/bdev/bdev.o 00:02:17.977 LIB libspdk_event_bdev.a 00:02:17.977 SO libspdk_event_bdev.so.6.0 00:02:17.977 SYMLINK libspdk_event_bdev.so 00:02:18.236 CC module/event/subsystems/scsi/scsi.o 00:02:18.236 CC module/event/subsystems/ublk/ublk.o 00:02:18.236 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.237 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.237 CC module/event/subsystems/nbd/nbd.o 
00:02:18.496 LIB libspdk_event_ublk.a 00:02:18.496 LIB libspdk_event_scsi.a 00:02:18.496 SO libspdk_event_ublk.so.3.0 00:02:18.496 LIB libspdk_event_nbd.a 00:02:18.496 SO libspdk_event_scsi.so.6.0 00:02:18.496 SO libspdk_event_nbd.so.6.0 00:02:18.496 LIB libspdk_event_nvmf.a 00:02:18.496 SYMLINK libspdk_event_ublk.so 00:02:18.496 SYMLINK libspdk_event_scsi.so 00:02:18.496 SO libspdk_event_nvmf.so.6.0 00:02:18.496 SYMLINK libspdk_event_nbd.so 00:02:18.756 SYMLINK libspdk_event_nvmf.so 00:02:18.756 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.756 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.017 LIB libspdk_event_vhost_scsi.a 00:02:19.017 LIB libspdk_event_iscsi.a 00:02:19.017 SO libspdk_event_vhost_scsi.so.3.0 00:02:19.017 SO libspdk_event_iscsi.so.6.0 00:02:19.017 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.017 SYMLINK libspdk_event_iscsi.so 00:02:19.277 SO libspdk.so.6.0 00:02:19.277 SYMLINK libspdk.so 00:02:19.545 CC app/trace_record/trace_record.o 00:02:19.545 TEST_HEADER include/spdk/accel.h 00:02:19.546 TEST_HEADER include/spdk/accel_module.h 00:02:19.546 TEST_HEADER include/spdk/assert.h 00:02:19.546 TEST_HEADER include/spdk/barrier.h 00:02:19.546 TEST_HEADER include/spdk/bdev.h 00:02:19.546 TEST_HEADER include/spdk/bdev_module.h 00:02:19.546 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.546 TEST_HEADER include/spdk/base64.h 00:02:19.546 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.546 TEST_HEADER include/spdk/bit_array.h 00:02:19.546 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.546 TEST_HEADER include/spdk/bit_pool.h 00:02:19.546 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.546 TEST_HEADER include/spdk/blob.h 00:02:19.546 TEST_HEADER include/spdk/blobfs.h 00:02:19.546 TEST_HEADER include/spdk/conf.h 00:02:19.546 CC app/spdk_nvme_perf/perf.o 00:02:19.546 TEST_HEADER include/spdk/cpuset.h 00:02:19.546 TEST_HEADER include/spdk/config.h 00:02:19.546 TEST_HEADER include/spdk/crc32.h 00:02:19.546 CC test/rpc_client/rpc_client_test.o 00:02:19.546 TEST_HEADER include/spdk/crc16.h 00:02:19.546 TEST_HEADER include/spdk/crc64.h 00:02:19.546 TEST_HEADER include/spdk/dif.h 00:02:19.546 TEST_HEADER include/spdk/dma.h 00:02:19.546 TEST_HEADER include/spdk/endian.h 00:02:19.546 TEST_HEADER include/spdk/env.h 00:02:19.546 TEST_HEADER include/spdk/event.h 00:02:19.546 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.546 TEST_HEADER include/spdk/fd.h 00:02:19.546 CXX app/trace/trace.o 00:02:19.546 TEST_HEADER include/spdk/file.h 00:02:19.546 TEST_HEADER include/spdk/fd_group.h 00:02:19.546 TEST_HEADER include/spdk/ftl.h 00:02:19.546 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.546 TEST_HEADER include/spdk/hexlify.h 00:02:19.546 TEST_HEADER include/spdk/idxd.h 00:02:19.546 TEST_HEADER include/spdk/histogram_data.h 00:02:19.546 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.546 TEST_HEADER include/spdk/init.h 00:02:19.546 CC app/spdk_nvme_identify/identify.o 00:02:19.546 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.546 TEST_HEADER include/spdk/ioat.h 00:02:19.546 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.546 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.546 TEST_HEADER include/spdk/json.h 00:02:19.546 TEST_HEADER include/spdk/keyring.h 00:02:19.546 TEST_HEADER include/spdk/likely.h 00:02:19.546 CC app/spdk_lspci/spdk_lspci.o 00:02:19.546 TEST_HEADER include/spdk/lvol.h 00:02:19.546 TEST_HEADER include/spdk/log.h 00:02:19.546 TEST_HEADER include/spdk/keyring_module.h 00:02:19.546 TEST_HEADER include/spdk/memory.h 00:02:19.546 TEST_HEADER include/spdk/mmio.h 00:02:19.546 
TEST_HEADER include/spdk/nbd.h 00:02:19.546 TEST_HEADER include/spdk/net.h 00:02:19.546 TEST_HEADER include/spdk/notify.h 00:02:19.546 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.546 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.546 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.546 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.546 CC app/spdk_top/spdk_top.o 00:02:19.546 TEST_HEADER include/spdk/nvme.h 00:02:19.546 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.546 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.546 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.546 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.546 TEST_HEADER include/spdk/nvmf.h 00:02:19.546 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.546 TEST_HEADER include/spdk/opal.h 00:02:19.546 TEST_HEADER include/spdk/nvmf_transport.h 00:02:19.546 TEST_HEADER include/spdk/opal_spec.h 00:02:19.546 TEST_HEADER include/spdk/pipe.h 00:02:19.546 TEST_HEADER include/spdk/pci_ids.h 00:02:19.546 TEST_HEADER include/spdk/rpc.h 00:02:19.546 TEST_HEADER include/spdk/queue.h 00:02:19.546 TEST_HEADER include/spdk/scheduler.h 00:02:19.546 TEST_HEADER include/spdk/reduce.h 00:02:19.546 TEST_HEADER include/spdk/scsi.h 00:02:19.546 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.546 TEST_HEADER include/spdk/sock.h 00:02:19.546 TEST_HEADER include/spdk/thread.h 00:02:19.546 TEST_HEADER include/spdk/stdinc.h 00:02:19.546 TEST_HEADER include/spdk/trace.h 00:02:19.546 TEST_HEADER include/spdk/string.h 00:02:19.546 TEST_HEADER include/spdk/tree.h 00:02:19.546 TEST_HEADER include/spdk/ublk.h 00:02:19.546 TEST_HEADER include/spdk/trace_parser.h 00:02:19.546 TEST_HEADER include/spdk/util.h 00:02:19.546 TEST_HEADER include/spdk/version.h 00:02:19.546 TEST_HEADER include/spdk/uuid.h 00:02:19.546 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.546 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.546 TEST_HEADER include/spdk/vmd.h 00:02:19.546 TEST_HEADER include/spdk/vhost.h 00:02:19.546 TEST_HEADER include/spdk/xor.h 00:02:19.546 TEST_HEADER include/spdk/zipf.h 00:02:19.546 CXX test/cpp_headers/accel_module.o 00:02:19.546 CXX test/cpp_headers/accel.o 00:02:19.546 CXX test/cpp_headers/assert.o 00:02:19.546 CXX test/cpp_headers/barrier.o 00:02:19.546 CXX test/cpp_headers/bdev.o 00:02:19.546 CXX test/cpp_headers/base64.o 00:02:19.546 CXX test/cpp_headers/bdev_zone.o 00:02:19.546 CXX test/cpp_headers/bit_array.o 00:02:19.546 CXX test/cpp_headers/bdev_module.o 00:02:19.546 CXX test/cpp_headers/bit_pool.o 00:02:19.546 CXX test/cpp_headers/blob_bdev.o 00:02:19.546 CXX test/cpp_headers/blobfs_bdev.o 00:02:19.546 CC app/nvmf_tgt/nvmf_main.o 00:02:19.546 CXX test/cpp_headers/blobfs.o 00:02:19.546 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.546 CXX test/cpp_headers/blob.o 00:02:19.546 CXX test/cpp_headers/conf.o 00:02:19.546 CXX test/cpp_headers/cpuset.o 00:02:19.547 CXX test/cpp_headers/config.o 00:02:19.547 CXX test/cpp_headers/crc16.o 00:02:19.547 CC app/spdk_dd/spdk_dd.o 00:02:19.547 CXX test/cpp_headers/crc64.o 00:02:19.547 CXX test/cpp_headers/dif.o 00:02:19.547 CXX test/cpp_headers/crc32.o 00:02:19.547 CXX test/cpp_headers/endian.o 00:02:19.547 CXX test/cpp_headers/env_dpdk.o 00:02:19.547 CXX test/cpp_headers/env.o 00:02:19.547 CXX test/cpp_headers/dma.o 00:02:19.547 CXX test/cpp_headers/fd_group.o 00:02:19.547 CXX test/cpp_headers/fd.o 00:02:19.547 CXX test/cpp_headers/file.o 00:02:19.547 CXX test/cpp_headers/event.o 00:02:19.547 CXX test/cpp_headers/ftl.o 00:02:19.547 CXX test/cpp_headers/gpt_spec.o 00:02:19.547 CXX test/cpp_headers/histogram_data.o 
00:02:19.547 CXX test/cpp_headers/hexlify.o 00:02:19.547 CXX test/cpp_headers/idxd_spec.o 00:02:19.547 CXX test/cpp_headers/idxd.o 00:02:19.547 CXX test/cpp_headers/init.o 00:02:19.547 CXX test/cpp_headers/ioat.o 00:02:19.547 CXX test/cpp_headers/ioat_spec.o 00:02:19.547 CXX test/cpp_headers/json.o 00:02:19.547 CXX test/cpp_headers/iscsi_spec.o 00:02:19.547 CXX test/cpp_headers/jsonrpc.o 00:02:19.547 CXX test/cpp_headers/keyring.o 00:02:19.547 CXX test/cpp_headers/keyring_module.o 00:02:19.547 CXX test/cpp_headers/likely.o 00:02:19.547 CXX test/cpp_headers/log.o 00:02:19.547 CXX test/cpp_headers/lvol.o 00:02:19.547 CXX test/cpp_headers/memory.o 00:02:19.547 CXX test/cpp_headers/mmio.o 00:02:19.547 CXX test/cpp_headers/nbd.o 00:02:19.547 CXX test/cpp_headers/net.o 00:02:19.547 CXX test/cpp_headers/nvme.o 00:02:19.547 CXX test/cpp_headers/notify.o 00:02:19.547 CXX test/cpp_headers/nvme_intel.o 00:02:19.547 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.547 CXX test/cpp_headers/nvme_spec.o 00:02:19.547 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.547 CXX test/cpp_headers/nvme_zns.o 00:02:19.547 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.547 CC app/spdk_tgt/spdk_tgt.o 00:02:19.832 CXX test/cpp_headers/nvmf.o 00:02:19.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.832 CXX test/cpp_headers/nvmf_spec.o 00:02:19.832 CXX test/cpp_headers/opal.o 00:02:19.832 CXX test/cpp_headers/nvmf_transport.o 00:02:19.832 CXX test/cpp_headers/opal_spec.o 00:02:19.832 CXX test/cpp_headers/pci_ids.o 00:02:19.832 CXX test/cpp_headers/pipe.o 00:02:19.832 CXX test/cpp_headers/queue.o 00:02:19.832 CXX test/cpp_headers/reduce.o 00:02:19.832 CC test/env/vtophys/vtophys.o 00:02:19.832 CC test/env/pci/pci_ut.o 00:02:19.832 CC test/env/memory/memory_ut.o 00:02:19.832 CC examples/util/zipf/zipf.o 00:02:19.832 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.832 CC examples/ioat/perf/perf.o 00:02:19.832 CC test/thread/poller_perf/poller_perf.o 00:02:19.832 CC examples/ioat/verify/verify.o 00:02:19.832 CC test/app/jsoncat/jsoncat.o 00:02:19.832 CC test/dma/test_dma/test_dma.o 00:02:19.832 CXX test/cpp_headers/rpc.o 00:02:19.832 CC test/app/stub/stub.o 00:02:19.832 CC test/app/histogram_perf/histogram_perf.o 00:02:20.105 CC app/fio/nvme/fio_plugin.o 00:02:20.105 CC test/app/bdev_svc/bdev_svc.o 00:02:20.105 LINK spdk_lspci 00:02:20.105 CC app/fio/bdev/fio_plugin.o 00:02:20.105 LINK rpc_client_test 00:02:20.105 LINK spdk_nvme_discover 00:02:20.105 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.105 LINK spdk_trace_record 00:02:20.366 LINK interrupt_tgt 00:02:20.366 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.366 CXX test/cpp_headers/scheduler.o 00:02:20.366 LINK zipf 00:02:20.366 CXX test/cpp_headers/scsi.o 00:02:20.366 CXX test/cpp_headers/scsi_spec.o 00:02:20.366 CXX test/cpp_headers/sock.o 00:02:20.366 CXX test/cpp_headers/stdinc.o 00:02:20.366 LINK jsoncat 00:02:20.366 CXX test/cpp_headers/thread.o 00:02:20.366 CXX test/cpp_headers/string.o 00:02:20.366 CXX test/cpp_headers/trace.o 00:02:20.366 LINK poller_perf 00:02:20.366 CXX test/cpp_headers/trace_parser.o 00:02:20.366 CXX test/cpp_headers/tree.o 00:02:20.366 CXX test/cpp_headers/ublk.o 00:02:20.366 LINK iscsi_tgt 00:02:20.366 CXX test/cpp_headers/util.o 00:02:20.366 CXX test/cpp_headers/uuid.o 00:02:20.366 LINK nvmf_tgt 00:02:20.366 CXX test/cpp_headers/version.o 00:02:20.366 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.366 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.366 CXX test/cpp_headers/vhost.o 00:02:20.366 CXX test/cpp_headers/vmd.o 
00:02:20.366 CXX test/cpp_headers/xor.o 00:02:20.366 CXX test/cpp_headers/zipf.o 00:02:20.366 LINK vtophys 00:02:20.366 LINK stub 00:02:20.366 LINK verify 00:02:20.366 LINK env_dpdk_post_init 00:02:20.366 LINK histogram_perf 00:02:20.366 LINK spdk_tgt 00:02:20.366 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.366 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.366 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.366 LINK ioat_perf 00:02:20.366 LINK spdk_dd 00:02:20.366 LINK bdev_svc 00:02:20.640 LINK pci_ut 00:02:20.640 LINK spdk_trace 00:02:20.640 LINK test_dma 00:02:20.640 CC examples/idxd/perf/perf.o 00:02:20.897 LINK spdk_nvme 00:02:20.897 LINK spdk_nvme_perf 00:02:20.897 CC test/event/reactor_perf/reactor_perf.o 00:02:20.897 CC examples/sock/hello_world/hello_sock.o 00:02:20.897 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.897 CC examples/vmd/led/led.o 00:02:20.897 CC test/event/event_perf/event_perf.o 00:02:20.897 CC test/event/reactor/reactor.o 00:02:20.897 LINK nvme_fuzz 00:02:20.897 LINK vhost_fuzz 00:02:20.897 CC test/event/app_repeat/app_repeat.o 00:02:20.897 CC examples/thread/thread/thread_ex.o 00:02:20.897 LINK spdk_bdev 00:02:20.897 LINK spdk_nvme_identify 00:02:20.897 CC test/event/scheduler/scheduler.o 00:02:20.897 LINK mem_callbacks 00:02:20.897 LINK spdk_top 00:02:20.897 LINK reactor_perf 00:02:20.897 LINK lsvmd 00:02:20.897 CC app/vhost/vhost.o 00:02:20.897 LINK reactor 00:02:20.897 LINK event_perf 00:02:20.897 LINK led 00:02:20.897 LINK app_repeat 00:02:20.897 LINK hello_sock 00:02:21.155 LINK idxd_perf 00:02:21.155 LINK thread 00:02:21.155 CC test/nvme/boot_partition/boot_partition.o 00:02:21.155 CC test/nvme/aer/aer.o 00:02:21.155 CC test/nvme/sgl/sgl.o 00:02:21.155 CC test/nvme/overhead/overhead.o 00:02:21.155 CC test/nvme/cuse/cuse.o 00:02:21.155 CC test/nvme/e2edp/nvme_dp.o 00:02:21.155 CC test/nvme/simple_copy/simple_copy.o 00:02:21.155 CC test/nvme/fused_ordering/fused_ordering.o 00:02:21.155 CC test/nvme/reserve/reserve.o 00:02:21.155 CC test/nvme/err_injection/err_injection.o 00:02:21.155 CC test/nvme/startup/startup.o 00:02:21.155 LINK scheduler 00:02:21.155 CC test/nvme/reset/reset.o 00:02:21.155 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.155 CC test/nvme/compliance/nvme_compliance.o 00:02:21.155 CC test/nvme/connect_stress/connect_stress.o 00:02:21.155 CC test/nvme/fdp/fdp.o 00:02:21.155 CC test/blobfs/mkfs/mkfs.o 00:02:21.155 CC test/accel/dif/dif.o 00:02:21.155 LINK vhost 00:02:21.155 LINK memory_ut 00:02:21.155 CC test/lvol/esnap/esnap.o 00:02:21.156 LINK boot_partition 00:02:21.156 LINK startup 00:02:21.156 LINK fused_ordering 00:02:21.156 LINK reserve 00:02:21.414 LINK doorbell_aers 00:02:21.414 LINK err_injection 00:02:21.414 LINK connect_stress 00:02:21.414 LINK simple_copy 00:02:21.414 LINK sgl 00:02:21.414 LINK aer 00:02:21.414 LINK nvme_dp 00:02:21.414 LINK mkfs 00:02:21.414 LINK reset 00:02:21.414 LINK overhead 00:02:21.414 LINK nvme_compliance 00:02:21.414 LINK fdp 00:02:21.414 CC examples/nvme/reconnect/reconnect.o 00:02:21.414 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.414 CC examples/nvme/abort/abort.o 00:02:21.414 CC examples/nvme/hello_world/hello_world.o 00:02:21.414 CC examples/nvme/hotplug/hotplug.o 00:02:21.414 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.414 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.414 CC examples/nvme/arbitration/arbitration.o 00:02:21.414 CC examples/accel/perf/accel_perf.o 00:02:21.674 CC examples/blob/cli/blobcli.o 00:02:21.674 CC examples/blob/hello_world/hello_blob.o 
00:02:21.674 LINK dif 00:02:21.674 LINK pmr_persistence 00:02:21.674 LINK cmb_copy 00:02:21.674 LINK hello_world 00:02:21.674 LINK hotplug 00:02:21.674 LINK reconnect 00:02:21.674 LINK arbitration 00:02:21.674 LINK abort 00:02:21.674 LINK iscsi_fuzz 00:02:21.933 LINK hello_blob 00:02:21.933 LINK nvme_manage 00:02:21.933 LINK accel_perf 00:02:21.933 LINK blobcli 00:02:22.191 CC test/bdev/bdevio/bdevio.o 00:02:22.191 LINK cuse 00:02:22.449 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.449 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.449 LINK bdevio 00:02:22.708 LINK hello_bdev 00:02:22.967 LINK bdevperf 00:02:23.535 CC examples/nvmf/nvmf/nvmf.o 00:02:23.535 LINK nvmf 00:02:24.472 LINK esnap 00:02:25.040 00:02:25.040 real 0m43.984s 00:02:25.040 user 6m29.275s 00:02:25.040 sys 3m28.076s 00:02:25.040 19:38:16 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.040 19:38:16 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.040 ************************************ 00:02:25.040 END TEST make 00:02:25.040 ************************************ 00:02:25.040 19:38:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.040 19:38:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.040 19:38:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.040 19:38:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.040 19:38:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.040 19:38:16 -- pm/common@44 -- $ pid=1763751 00:02:25.040 19:38:16 -- pm/common@50 -- $ kill -TERM 1763751 00:02:25.040 19:38:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.040 19:38:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.040 19:38:16 -- pm/common@44 -- $ pid=1763752 00:02:25.040 19:38:16 -- pm/common@50 -- $ kill -TERM 1763752 00:02:25.040 19:38:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.040 19:38:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.040 19:38:16 -- pm/common@44 -- $ pid=1763754 00:02:25.040 19:38:16 -- pm/common@50 -- $ kill -TERM 1763754 00:02:25.040 19:38:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.040 19:38:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.040 19:38:16 -- pm/common@44 -- $ pid=1763777 00:02:25.040 19:38:16 -- pm/common@50 -- $ sudo -E kill -TERM 1763777 00:02:25.040 19:38:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.040 19:38:16 -- nvmf/common.sh@7 -- # uname -s 00:02:25.040 19:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.040 19:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.040 19:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.040 19:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.040 19:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.040 19:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.040 19:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.040 19:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.040 19:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.040 19:38:16 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:02:25.040 19:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.040 19:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.040 19:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.040 19:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.040 19:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.040 19:38:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.040 19:38:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.040 19:38:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.040 19:38:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.040 19:38:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.040 19:38:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.040 19:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.040 19:38:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.040 19:38:16 -- paths/export.sh@5 -- # export PATH 00:02:25.040 19:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.040 19:38:16 -- nvmf/common.sh@47 -- # : 0 00:02:25.040 19:38:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:25.040 19:38:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:25.041 19:38:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.041 19:38:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.041 19:38:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.041 19:38:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:25.041 19:38:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:25.041 19:38:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:25.041 19:38:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.041 19:38:16 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.041 19:38:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.041 19:38:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.041 19:38:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.041 19:38:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.041 19:38:16 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.041 19:38:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.041 19:38:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.041 19:38:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.041 19:38:16 -- spdk/autotest.sh@48 -- # udevadm_pid=1822771 00:02:25.041 19:38:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.041 19:38:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.041 19:38:16 -- pm/common@17 -- # local monitor 00:02:25.041 19:38:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.041 19:38:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.041 19:38:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.041 19:38:16 -- pm/common@21 -- # date +%s 00:02:25.041 19:38:16 -- pm/common@21 -- # date +%s 00:02:25.041 19:38:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.041 19:38:16 -- pm/common@25 -- # sleep 1 00:02:25.041 19:38:16 -- pm/common@21 -- # date +%s 00:02:25.041 19:38:16 -- pm/common@21 -- # date +%s 00:02:25.041 19:38:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842696 00:02:25.041 19:38:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842696 00:02:25.041 19:38:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842696 00:02:25.041 19:38:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842696 00:02:25.041 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842696_collect-vmstat.pm.log 00:02:25.041 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842696_collect-cpu-temp.pm.log 00:02:25.041 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842696_collect-cpu-load.pm.log 00:02:25.041 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842696_collect-bmc-pm.bmc.pm.log 00:02:26.420 19:38:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.420 19:38:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.420 19:38:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:26.420 19:38:17 -- common/autotest_common.sh@10 -- # set +x 00:02:26.420 19:38:17 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.420 19:38:17 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:26.420 19:38:17 -- common/autotest_common.sh@10 -- # set +x 00:02:26.420 19:38:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:26.420 19:38:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.420 19:38:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
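The four Redirecting lines above show the convention the power monitors use: each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) is started in the background with its output redirected under spdk/../output/power, and its pid is written to a <name>.pid file that the kill -TERM loop at the top of this section walks during cleanup. A minimal sketch of that pid-file pattern, assuming illustrative helper names (the real code lives in scripts/perf/pm/ and the pm/common library):

  # sketch only: pid-file start/stop convention for the resource monitors
  power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power

  start_monitor() {
      local collector=$1                      # e.g. collect-cpu-load
      mkdir -p "$power_dir"
      "$collector" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
      echo $! > "$power_dir/${collector##*/}.pid"
  }

  stop_monitors() {
      local pidfile
      for pidfile in "$power_dir"/collect-*.pid; do
          [[ -e $pidfile ]] || continue       # monitor never started
          kill -TERM "$(<"$pidfile")" 2>/dev/null || true
      done
  }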
00:02:26.420 19:38:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:26.420 19:38:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.420 19:38:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.420 19:38:17 -- common/autotest_common.sh@1455 -- # uname 00:02:26.420 19:38:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:26.420 19:38:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.420 19:38:17 -- common/autotest_common.sh@1475 -- # uname 00:02:26.420 19:38:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:26.420 19:38:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:26.420 19:38:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.420 19:38:17 -- spdk/autotest.sh@72 -- # hash lcov 00:02:26.420 19:38:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.420 19:38:17 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:26.420 --rc lcov_branch_coverage=1 00:02:26.420 --rc lcov_function_coverage=1 00:02:26.420 --rc genhtml_branch_coverage=1 00:02:26.421 --rc genhtml_function_coverage=1 00:02:26.421 --rc genhtml_legend=1 00:02:26.421 --rc geninfo_all_blocks=1 00:02:26.421 ' 00:02:26.421 19:38:17 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:26.421 --rc lcov_branch_coverage=1 00:02:26.421 --rc lcov_function_coverage=1 00:02:26.421 --rc genhtml_branch_coverage=1 00:02:26.421 --rc genhtml_function_coverage=1 00:02:26.421 --rc genhtml_legend=1 00:02:26.421 --rc geninfo_all_blocks=1 00:02:26.421 ' 00:02:26.421 19:38:17 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:26.421 --rc lcov_branch_coverage=1 00:02:26.421 --rc lcov_function_coverage=1 00:02:26.421 --rc genhtml_branch_coverage=1 00:02:26.421 --rc genhtml_function_coverage=1 00:02:26.421 --rc genhtml_legend=1 00:02:26.421 --rc geninfo_all_blocks=1 00:02:26.421 --no-external' 00:02:26.421 19:38:17 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:26.421 --rc lcov_branch_coverage=1 00:02:26.421 --rc lcov_function_coverage=1 00:02:26.421 --rc genhtml_branch_coverage=1 00:02:26.421 --rc genhtml_function_coverage=1 00:02:26.421 --rc genhtml_legend=1 00:02:26.421 --rc geninfo_all_blocks=1 00:02:26.421 --no-external' 00:02:26.421 19:38:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.421 lcov: LCOV version 1.14 00:02:26.421 19:38:17 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:38.631 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:46.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:46.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:46.784 
[geninfo repeated the same 'no functions found' / 'GCOV did not produce any data' warning pair for every remaining test/cpp_headers/*.gcno object, bit_pool.gcno through zipf.gcno (00:02:46.784 to 00:02:47.044)]
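The warning flood condensed above is expected rather than a failure: each test/cpp_headers object appears to compile one public SPDK header on its own, so its .gcno contains no executable functions for geninfo to report. The step that triggers it is the baseline capture traced at autotest.sh@83-85; a condensed sketch with the flags copied from the trace (the paths are this workspace's, and running it requires a GCC coverage build):

  #!/usr/bin/env bash
  # sketch of the coverage baseline capture seen in the trace above
  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$src/../output
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
   --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
   --rc genhtml_legend=1 --rc geninfo_all_blocks=1'

  # -c -i captures an "initial" (all-zero) baseline from the .gcno files, so
  # sources never executed by any test still show up in the final report
  lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"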
00:02:50.332 19:38:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:50.332 19:38:41 -- common/autotest_common.sh@724 -- # xtrace_disable
00:02:50.332 19:38:41 -- common/autotest_common.sh@10 -- # set +x
00:02:50.332 19:38:41 -- spdk/autotest.sh@91 -- # rm -f
00:02:50.332 19:38:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:53.624 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:53.624 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:53.624 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:53.624 19:38:44 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:53.624 19:38:44 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:53.624 19:38:44 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:53.624 19:38:44 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:53.624 19:38:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:53.624 19:38:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:53.624 19:38:44 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:53.624 19:38:44 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.624 19:38:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.624 19:38:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.624 19:38:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.624 19:38:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.624 19:38:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.624 19:38:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.624 19:38:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.624 No valid GPT data, bailing 00:02:53.624 19:38:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.624 19:38:44 -- scripts/common.sh@391 -- # pt= 00:02:53.624 19:38:44 -- scripts/common.sh@392 -- # return 1 00:02:53.624 19:38:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.624 1+0 records in 00:02:53.624 1+0 records out 00:02:53.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230841 s, 454 MB/s 00:02:53.624 19:38:44 -- spdk/autotest.sh@118 -- # sync 00:02:53.625 19:38:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.625 19:38:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.625 19:38:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:58.908 19:38:50 -- spdk/autotest.sh@124 -- # uname -s 00:02:58.908 19:38:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:58.908 19:38:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.908 19:38:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.908 19:38:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.908 19:38:50 -- common/autotest_common.sh@10 -- # set +x 00:02:58.908 ************************************ 00:02:58.908 START TEST setup.sh 00:02:58.908 ************************************ 00:02:58.908 19:38:50 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.908 * Looking for test storage... 00:02:58.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.908 19:38:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:58.908 19:38:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:58.908 19:38:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:58.908 19:38:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.908 19:38:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.908 19:38:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:58.908 ************************************ 00:02:58.908 START TEST acl 00:02:58.908 ************************************ 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:58.908 * Looking for test storage... 
00:02:58.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.908 19:38:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:58.908 19:38:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:58.908 19:38:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.908 19:38:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.191 19:38:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:02.191 19:38:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:02.191 19:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.191 19:38:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:02.191 19:38:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.191 19:38:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.726 Hugepages 00:03:04.726 node hugesize free / total 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 00:03:04.726 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.726 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:04.727 19:38:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:04.727 19:38:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:04.727 19:38:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:04.727 19:38:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.727 ************************************ 00:03:04.727 START TEST denied 00:03:04.727 ************************************ 00:03:04.727 19:38:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:04.727 19:38:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:04.727 19:38:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:04.727 19:38:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:04.727 19:38:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.727 19:38:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.014 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:08.014 19:38:58 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:08.014 19:38:58 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:08.014 19:38:58 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.014 19:38:58 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:08.014 19:38:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:08.014 19:38:58 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.015 19:38:58 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.015 19:38:58 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:08.015 19:38:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.015 19:38:58 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.333 00:03:11.333 real 0m6.707s 00:03:11.333 user 0m2.176s 00:03:11.333 sys 0m3.894s 00:03:11.333 19:39:02 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.333 19:39:02 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:11.333 ************************************ 00:03:11.333 END TEST denied 00:03:11.333 ************************************ 00:03:11.333 19:39:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:11.333 19:39:02 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:11.333 19:39:02 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:11.333 19:39:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.594 ************************************ 00:03:11.594 START TEST allowed 00:03:11.594 ************************************ 00:03:11.594 19:39:02 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:11.594 19:39:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:11.594 19:39:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:11.594 19:39:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:11.594 19:39:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.594 19:39:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.786 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.786 19:39:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:15.786 19:39:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:15.786 19:39:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:15.786 19:39:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.786 19:39:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.320 00:03:18.320 real 0m6.636s 00:03:18.320 user 0m2.086s 00:03:18.320 sys 0m3.707s 00:03:18.320 19:39:09 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:18.320 19:39:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.320 ************************************ 00:03:18.320 END TEST allowed 00:03:18.320 ************************************ 00:03:18.320 00:03:18.320 real 0m19.276s 00:03:18.320 user 0m6.492s 00:03:18.320 sys 0m11.511s 00:03:18.320 19:39:09 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:18.320 19:39:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.320 ************************************ 00:03:18.320 END TEST acl 00:03:18.320 ************************************ 00:03:18.320 19:39:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.320 19:39:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:18.320 19:39:09 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:18.320 19:39:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.320 ************************************ 00:03:18.320 START TEST hugepages 00:03:18.320 ************************************ 00:03:18.320 19:39:09 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.320 * Looking for test storage... 00:03:18.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.320 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.321 19:39:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168360232 kB' 'MemAvailable: 171593464 kB' 'Buffers: 3896 kB' 'Cached: 14665388 kB' 'SwapCached: 0 kB' 'Active: 11529364 kB' 'Inactive: 3694312 kB' 'Active(anon): 11111408 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557788 kB' 'Mapped: 183152 kB' 'Shmem: 10557016 kB' 'KReclaimable: 530356 kB' 'Slab: 1185804 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 655448 kB' 'KernelStack: 20960 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12655684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317180 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:18.321 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:18.321 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the '[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' test and its 'continue' repeat for every remaining key of the /proc/meminfo snapshot above, MemFree through AnonHugePages]
00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.322 19:39:09 
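[Editor's note] The condensed xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field, followed by the per-node hugepage discovery and clear_hp zeroing. A minimal Bash sketch of the same lookup pattern, reconstructed from the trace rather than copied from the SPDK source (the not-found return value is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob  # required for the +([0-9]) pattern used below

    # get_meminfo <key> [node] -- print the value of one /proc/meminfo field,
    # optionally scoped to a NUMA node. Reconstructed from the xtrace; the
    # real setup/common.sh may differ in detail.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem line

        mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node is requested.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # "continue" in the trace: skip until the requested key matches.
            [[ $var == "$get" ]] || continue
            echo "$val"  # e.g. 2048 for Hugepagesize (value in kB)
            return 0
        done
        return 1  # assumption: key not present
    }

    get_meminfo Hugepagesize  # prints 2048 on this test node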
00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:18.322 19:39:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:18.322 19:39:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:18.322 19:39:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:18.322 19:39:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:18.322 ************************************
00:03:18.322 START TEST default_setup
00:03:18.322 ************************************
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.323 19:39:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
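[Editor's note] get_test_nr_hugepages above turns a target allocation size into a page count: with size=2097152 (read as kB, i.e. 2 GiB) and the detected default_hugepages of 2048 kB, nr_hugepages comes out at 1024, which is exactly what hugepages.sh@57 records before the setup.sh output below. The unit interpretation is inferred from the trace, not taken from the SPDK source; a one-liner confirming the arithmetic:

    size_kb=2097152        # argument passed to get_test_nr_hugepages
    hugepage_kb=2048       # default_hugepages, from Hugepagesize in /proc/meminfo
    echo $((size_kb / hugepage_kb))   # -> 1024, matching nr_hugepages in the trace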
00:03:20.854 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:20.854 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:21.422 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
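[Editor's note] The hugepages.sh@96 test above inspects /sys/kernel/mm/transparent_hugepage/enabled, which on this node reads "always [madvise] never"; the bracketed token is the active mode, so THP is not disabled and verify_nr_hugepages samples AnonHugePages before doing any hugepage accounting. A hedged sketch of the same check (variable names are mine, not SPDK's):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out huge pages behind our back; record current usage.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP active ($thp), AnonHugePages=${anon_kb} kB"
    fi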
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.422 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.423 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:21.423 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:21.423 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170552160 kB' 'MemAvailable: 173785392 kB' 'Buffers: 3896 kB' 'Cached: 14665484 kB' 'SwapCached: 0 kB' 'Active: 11536632 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118676 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564404 kB' 'Mapped: 182696 kB' 'Shmem: 10557112 kB' 'KReclaimable: 530356 kB' 'Slab: 1184860 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654504 kB' 'KernelStack: 20768 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12661420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:21.423 19:39:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # [xtrace condensed: the scan continues past every key from MemTotal through HardwareCorrupted that is not AnonHugePages]
00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
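[Editor's note] verify_nr_hugepages calls get_meminfo once per key (AnonHugePages above, HugePages_Surp and HugePages_Rsvd below), so /proc/meminfo is rescanned for each value. That is not how the SPDK script works, but the same values can be collected in one pass; a sketch using awk plus eval (illustration only):

    # One-pass alternative: emit KEY=VALUE lines and let the shell eval them.
    eval "$(awk -F'[: ]+' '
        /^(AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)/ {
            print $1 "=" $2
        }' /proc/meminfo)"
    echo "anon=$AnonHugePages surp=$HugePages_Surp resv=$HugePages_Rsvd"

The per-key helper keeps the trace readable (every lookup is visible in the xtrace), at the cost of the repeated scans condensed above.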
00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170547636 kB' 'MemAvailable: 173780868 kB' 'Buffers: 3896 kB' 'Cached: 14665488 kB' 'SwapCached: 0 kB' 'Active: 11540032 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122076 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568200 kB' 'Mapped: 183120 kB' 'Shmem: 10557116 kB' 'KReclaimable: 530356 kB' 'Slab: 1184804 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654448 kB' 'KernelStack: 20720 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12665268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.691 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.692 19:39:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.692 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 records (IFS=': ', read -r var val _, compare, continue) repeat for every remaining /proc/meminfo field until HugePages_Surp ...]
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
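The records above are bash xtrace output from SPDK's setup/common.sh get_meminfo helper: it walks /proc/meminfo one line at a time with IFS=': ' and read -r var val _, comparing each key against the requested field (xtrace prints the quoted right-hand side as the escaped literal \H\u\g\e\P\a\g\e\s\_\S\u\r\p). A minimal standalone sketch of that parsing pattern, assuming only bash and a standard Linux /proc/meminfo; it mirrors what the trace shows rather than reproducing the SPDK source verbatim:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. HugePages_Surp.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Quoting the right-hand side forces a literal comparison, which
            # is what xtrace renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p above.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on the host traced here, hence surp=0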
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170546260 kB' 'MemAvailable: 173779492 kB' 'Buffers: 3896 kB' 'Cached: 14665504 kB' 'SwapCached: 0 kB' 'Active: 11536484 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118528 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564648 kB' 'Mapped: 183004 kB' 'Shmem: 10557132 kB' 'KReclaimable: 530356 kB' 'Slab: 1184804 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654448 kB' 'KernelStack: 20864 kB' 'PageTables: 9988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12660500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.693 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 records repeat for every /proc/meminfo field until HugePages_Rsvd ...]
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:21.695 nr_hugepages=1024
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.695 resv_hugepages=0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.695 surplus_hugepages=0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.695 anon_hugepages=0
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170546996 kB' 'MemAvailable: 173780228 kB' 'Buffers: 3896 kB' 'Cached: 14665524 kB' 'SwapCached: 0 kB' 'Active: 11536068 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118112 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564200 kB' 'Mapped: 182616 kB' 'Shmem: 10557152 kB' 'KReclaimable: 530356 kB' 'Slab: 1184804 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654448 kB' 'KernelStack: 20624 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12660524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
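With surp=0 and resv=0 established, the trace validates the pool twice, at hugepages.sh@107 with (( 1024 == nr_hugepages + surp + resv )) and at @109 with (( 1024 == nr_hugepages )): every configured hugepage is accounted for, with no surplus or reserved pages. The snapshot's own numbers bear out the arithmetic, since Hugetlb = HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, exactly as reported. A small sketch of that cross-check, with all values copied from the snapshot above:

    #!/usr/bin/env bash
    # Values taken from the /proc/meminfo snapshot printed by get_meminfo.
    nr_hugepages=1024 surp=0 resv=0
    total=1024            # HugePages_Total
    hugepagesize_kb=2048  # Hugepagesize, in kB
    # The same identity the trace evaluates at setup/hugepages.sh@107.
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool consistent'
    echo "Hugetlb = $((total * hugepagesize_kb)) kB"  # 2097152 kB, as in the snapshot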
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.695 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 records repeat for every /proc/meminfo field until HugePages_Total ...]
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91501012 kB' 'MemUsed: 6114616 kB' 'SwapCached: 0 kB' 'Active: 1979772 kB' 'Inactive: 217236 kB' 'Active(anon): 1817948 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048804 kB' 'Mapped: 82860 kB' 'AnonPages: 151536 kB' 'Shmem: 1669744 kB' 'KernelStack: 10776 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 663392 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
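This second phase queries per-node counters: because get_meminfo was called with a node argument (HugePages_Surp 0), common.sh@23-@24 switch mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the extglob substitution at @29 strips the "Node 0 " prefix that every line of the per-node file carries. A hedged sketch of that selection logic; the function name get_node_meminfo is mine, not SPDK's:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below
    # Read one field from a NUMA node's meminfo, falling back to the
    # system-wide file when the per-node file does not exist.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_node_meminfo HugePages_Surp 0   # node0's snapshot above reports 0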
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.697 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 records repeat for each node0 meminfo field; the log is truncated mid-scan at the following record ...]
00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.698 node0=1024 expecting 1024 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.698 00:03:21.698 real 0m3.316s 00:03:21.698 user 0m0.945s 00:03:21.698 sys 0m1.465s 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.698 19:39:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:21.698 ************************************ 00:03:21.698 END TEST default_setup 00:03:21.698 ************************************ 00:03:21.698 19:39:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:21.698 19:39:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.698 19:39:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.698 19:39:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.698 ************************************ 00:03:21.698 START TEST per_node_1G_alloc 00:03:21.698 ************************************ 00:03:21.698 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:21.698 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:21.698 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.699 19:39:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.242 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:24.242 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:24.242 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
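The xtrace above shows the whole computation: get_test_nr_hugepages takes a 1 GiB request (1048576 kB), derives 512 default-sized pages from it, and get_test_nr_hugepages_per_node assigns that count to each of nodes 0 and 1 before setup.sh is invoked with NRHUGE and HUGENODE. A minimal sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots below; names mirror the trace but the real setup/hugepages.sh may differ in detail:

#!/usr/bin/env bash
# Sketch: convert a requested size in kB into per-node hugepage counts,
# as traced for 'get_test_nr_hugepages 1048576 0 1'.
default_hugepages=2048        # kB per page ('Hugepagesize: 2048 kB')
size=1048576                  # requested kB (1 GiB)
node_ids=(0 1)                # nodes named on the command line

nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512 pages
nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages           # 512 pages on each node
done

# The harness then hands the result to the reservation script, e.g.:
#   NRHUGE=512 HUGENODE=0,1 scripts/setup.sh
# which is why the snapshots below report HugePages_Total: 1024 (2 x 512).
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"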
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.242 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170588052 kB' 'MemAvailable: 173821284 kB' 'Buffers: 3896 kB' 'Cached: 14665616 kB' 'SwapCached: 0 kB' 'Active: 11536412 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118456 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564472 kB' 'Mapped: 182684 kB' 'Shmem: 10557244 kB' 'KReclaimable: 530356 kB' 'Slab: 1185148 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654792 kB' 'KernelStack: 20672 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12659652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317416 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:24.243 [... xtrace elided: the '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _' triplet repeats for every /proc/meminfo key from MemTotal through HardwareCorrupted ...]
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
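Every get_meminfo call in this stretch of the trace runs the same small helper from setup/common.sh: point mem_f at /proc/meminfo (or at a node's meminfo file when a node id is supplied), mapfile the contents, strip any 'Node N ' prefix, then scan line by line with IFS=': ' read until the requested key matches; the long runs of 'continue' elided above are that scan. A condensed, runnable reconstruction pieced together from the traced commands, entered here for HugePages_Surp (the actual helper in the SPDK tree may differ in detail):

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # With a node id, read the per-node view of the same counters instead;
    # with no id this expands to .../node/node/meminfo and the test fails,
    # exactly as the trace shows.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local mem line
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with 'Node N '
    # Scan every 'Key: value [kB]' line until the requested key matches.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Against the snapshot just printed, get_meminfo AnonHugePages emits 0, which is exactly the value bound to anon a few trace lines above; the call being entered here performs the same scan for HugePages_Surp.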
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.243 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170595380 kB' 'MemAvailable: 173828612 kB' 'Buffers: 3896 kB' 'Cached: 14665620 kB' 'SwapCached: 0 kB' 'Active: 11536332 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118376 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564404 kB' 'Mapped: 182680 kB' 'Shmem: 10557248 kB' 'KReclaimable: 530356 kB' 'Slab: 1185144 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654788 kB' 'KernelStack: 20720 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12661164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:24.244 [... xtrace elided: the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _' triplet repeats for every /proc/meminfo key from MemTotal through HugePages_Rsvd ...]
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
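verify_nr_hugepages has now gathered anon=0 and surp=0 and is about to fetch HugePages_Rsvd; together with the snapshot values, these counters describe the state of the 2048 kB page pool that setup.sh just configured. An illustrative consistency check with the snapshot's numbers hard-coded (the exact assertion lives in setup/hugepages.sh and may well differ):

# Illustrative check of how the counters in the snapshots relate.
total=1024   # HugePages_Total: configured pool plus any surplus pages
free=1024    # HugePages_Free: pages not yet faulted in (includes reserved ones)
rsvd=0       # HugePages_Rsvd: promised to mappings but not yet faulted in
surp=0       # HugePages_Surp: overcommit pages above the configured total
expected=1024                                # nr_hugepages requested by the test

(( total - surp == expected )) || echo "persistent pool != ${expected}"
(( rsvd <= free ))             || echo "reserved pages exceed free pages"
echo "pool: ${total} total, ${free} free, ${rsvd} reserved, ${surp} surplus"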
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.245 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.246 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170595188 kB' 'MemAvailable: 173828420 kB' 'Buffers: 3896 kB' 'Cached: 14665636 kB' 'SwapCached: 0 kB' 'Active: 11536424 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118468 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564444 kB' 'Mapped: 182680 kB' 'Shmem: 10557264 kB' 'KReclaimable: 530356 kB' 'Slab: 1185144 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654788 kB' 'KernelStack: 20704 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12661188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace of the setup/common.sh@31-32 read/continue scan over the /proc/meminfo keys elided]
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:24.601 nr_hugepages=1024
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.601 resv_hugepages=0
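The checks that follow (hugepages.sh@107 and @109 in the trace) appear to assert that the kernel delivered exactly what was requested: with surp=0 and resv=0, HugePages_Total from /proc/meminfo must equal the requested nr_hugepages, 1024 here. A hedged standalone restatement of that accounting check, reading live values rather than the ones captured above:

    #!/usr/bin/env bash
    # Re-derive the invariant asserted at setup/hugepages.sh@107:
    #   HugePages_Total == nr_hugepages + surplus + reserved
    # (with surplus and reserved both 0 in this run, it reduces to 1024 == 1024).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK" ||
        echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2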
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.601 surplus_hugepages=0
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.601 anon_hugepages=0
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.601 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170596452 kB' 'MemAvailable: 173829684 kB' 'Buffers: 3896 kB' 'Cached: 14665636 kB' 'SwapCached: 0 kB' 'Active: 11536392 kB' 'Inactive: 3694312 kB' 'Active(anon): 11118436 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564412 kB' 'Mapped: 182680 kB' 'Shmem: 10557264 kB' 'KReclaimable: 530356 kB' 'Slab: 1185144 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654788 kB' 'KernelStack: 20736 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12659716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace of the setup/common.sh@31-32 read/continue scan over the /proc/meminfo keys elided]
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
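Next the trace enters get_nodes and the per-node verification: hugepages.sh enumerates the NUMA nodes under /sys/devices/system/node with an extglob, expects the 1024 pages to be split 512/512 across the two nodes of this machine, and then re-runs get_meminfo against each node's own meminfo file, whose lines carry a "Node <id> " prefix that common.sh@29 strips so the same key scan works unchanged. A small sketch of both steps under those assumptions (variable names mirror the trace; the script below is illustrative, not the SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob
    # 1) Enumerate NUMA nodes and record the expected per-node share, as
    #    get_nodes does (512 pages per node on this two-node machine).
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 in the trace above

    # 2) Read one key from a node-local meminfo file. Its lines look like
    #    "Node 0 MemTotal: 97615628 kB", so strip the "Node <id> " prefix
    #    first (the same extglob trick as setup/common.sh@29), then scan.
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp=$val" && break
    done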
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92583832 kB' 'MemUsed: 5031796 kB' 'SwapCached: 0 kB' 'Active: 1979412 kB' 'Inactive: 217236 kB' 'Active(anon): 1817588 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048812 kB' 'Mapped: 82864 kB' 'AnonPages: 150968 kB' 'Shmem: 1669752 kB' 'KernelStack: 10600 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 663544 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 317124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.602 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace of the setup/common.sh@31-32 read/continue scan over the node0 meminfo keys elided]
00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.603 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78017564 kB' 'MemUsed: 15747944 kB' 'SwapCached: 0 kB' 'Active: 9556916 kB' 'Inactive: 3477076 kB' 'Active(anon): 9300784 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12620788 kB' 'Mapped: 99816 kB' 'AnonPages: 413268 kB' 'Shmem: 8887580 kB' 'KernelStack: 10040 kB' 'PageTables: 5348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183936 kB' 'Slab: 521592 kB' 'SReclaimable: 183936 kB' 'SUnreclaim: 337656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
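The block above is setup/common.sh's get_meminfo resolving HugePages_Surp for NUMA node 1: it reads /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from every line, then splits each line on ': ' until the requested field matches. A minimal self-contained sketch of that parsing pattern follows; the structure is inferred from the xtrace and is an assumed simplification, not the SPDK source itself:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (structure inferred
    # from the xtrace; an assumed simplification of setup/common.sh).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node number is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split e.g. "HugePages_Surp:     0" into field name and value.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # the trace above prints 0 at this point
                return 0
            fi
        done
        return 1
    }
    get_meminfo HugePages_Surp 1   # node 1, as in the trace
    # Roughly equivalent one-liner for the system-wide file:
    #   awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo

The field-by-field loop in that sketch is what produces the long compare/continue runs elided in this log; an awk lookup would do the same job in one pass.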
[xtrace elided: setup/common.sh@31-32 read/compare loop scans the node1 meminfo fields just printed (MemTotal, MemFree ... HugePages_Total, HugePages_Free), continuing past each one until HugePages_Surp is reached]
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:24.604
00:03:24.604 real 0m2.684s
00:03:24.604 user 0m1.049s
00:03:24.604 sys 0m1.590s
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:24.604 19:39:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.604 ************************************
00:03:24.604 END TEST per_node_1G_alloc
00:03:24.604 ************************************
00:03:24.604 19:39:15 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:24.604 19:39:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
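The run_test wrapper above launches even_2G_alloc, which asks for 2097152 kB worth of 2048 kB hugepages and spreads them evenly across both NUMA nodes, as the trace that follows shows. A hedged sketch of that arithmetic (variable names mirror the trace; this is an assumed simplification of setup/hugepages.sh, not the script itself):

    #!/usr/bin/env bash
    # Sketch of the allocation math even_2G_alloc walks through below
    # (assumed simplification; names taken from the xtrace).
    size_kb=2097152           # requested pool: 2 GiB, per "local size=2097152"
    hugepagesize_kb=2048      # 2 MiB hugepages, per Hugepagesize in the dumps
    no_nodes=2                # per "local _no_nodes=2"
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, per "nr_hugepages=1024"
    declare -a nodes_test
    # Count down from the last node, giving each an even share; this matches
    # the repeated "nodes_test[_no_nodes - 1]=512" lines in the trace.
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh is then expected to apply this per-node split rather than a single system-wide pool.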
00:03:24.604 19:39:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:24.604 19:39:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.604 ************************************
00:03:24.604 START TEST even_2G_alloc
00:03:24.604 ************************************
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:24.604 19:39:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.604 19:39:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:27.147 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:27.147 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.147 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.147 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170618552 kB' 'MemAvailable: 173851784 kB' 'Buffers: 3896 kB' 'Cached: 14665776 kB' 'SwapCached: 0 kB' 'Active: 11533452 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115496 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561244 kB' 'Mapped: 181600 kB' 'Shmem: 10557404 kB' 'KReclaimable: 530356 kB' 'Slab: 1184020 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653664 kB' 'KernelStack: 20688 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace elided: setup/common.sh@31-32 read/compare loop steps through each field of the dump (MemTotal, MemFree ... Percpu, HardwareCorrupted) until the AnonHugePages line is reached]
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
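The anon=0 above is the outcome of the hugepages.sh@96 THP check followed by the AnonHugePages query: transparent hugepages only count toward the verification when THP is not pinned to [never]. A sketch of that accounting step (an assumed simplification of setup/hugepages.sh@96-97; the 2048 kB divisor is inferred from the Hugepagesize shown in the dumps):

    #!/usr/bin/env bash
    # Sketch of the anon-hugepage accounting behind "anon=0" above
    # (assumed simplification, not the SPDK source).
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    # The trace shows "always [madvise] never", i.e. THP is not pinned to
    # [never], so the AnonHugePages query actually runs.
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        anon=$(( ${anon_kb:-0} / 2048 ))   # kB worth of assumed 2 MiB pages
    fi
    echo "anon=$anon"    # 0 here: the dump shows AnonHugePages: 0 kB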
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170618404 kB' 'MemAvailable: 173851636 kB' 'Buffers: 3896 kB' 'Cached: 14665780 kB' 'SwapCached: 0 kB' 'Active: 11533348 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115392 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561184 kB' 'Mapped: 181584 kB' 'Shmem: 10557408 kB' 'KReclaimable: 530356 kB' 'Slab: 1184064 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653708 kB' 'KernelStack: 20624 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace elided: setup/common.sh@31-32 read/compare loop begins scanning these fields against HugePages_Surp; this excerpt of the log breaks off mid-scan, after the Writeback field]
00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.149 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.150 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
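Context for the trace above: get_meminfo (test/setup/common.sh) resolves one field of /proc/meminfo, or of a per-NUMA-node meminfo file when a node is given, by splitting each "key: value" line on ': ' and echoing the value of the first key that matches. A minimal sketch reconstructed from the xtrace, not the verbatim SPDK helper (the loop form and the combined node guard are simplifications):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (reconstructed from the
# xtrace; simplified, not a verbatim copy of SPDK's setup/common.sh).
shopt -s extglob # for the +([0-9]) pattern used when stripping "Node <n> "

get_meminfo() {
    local get=$1 node=${2:-} # field name, optional NUMA node number
    local var val
    local mem_f=/proc/meminfo
    local -a mem
    # When a node is requested and its meminfo exists, read that instead
    # (assumption: the traced @23/@25 checks select the source this way).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # "HugePages_Surp:     0" -> var=HugePages_Surp, val=0
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # the "continue" entries above
        echo "$val" # kB fields keep only the number; the unit lands in "_"
        return 0
    done
    return 1
}

# Usage, as at hugepages.sh@99 above: surp=$(get_meminfo HugePages_Surp)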
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.151 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170618272 kB' 'MemAvailable: 173851504 kB' 'Buffers: 3896 kB' 'Cached: 14665796 kB' 'SwapCached: 0 kB' 'Active: 11533200 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115244 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561036 kB' 'Mapped: 181584 kB' 'Shmem: 10557424 kB' 'KReclaimable: 530356 kB' 'Slab: 1184064 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653708 kB' 'KernelStack: 20608 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12643912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace of the field-by-field scan elided: setup/common.sh@32 tests every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skips each with "continue"]
00:03:27.153 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.153 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.153 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.153 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.154 nr_hugepages=1024
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.154 resv_hugepages=0
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.154 surplus_hugepages=0
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:27.154 anon_hugepages=0
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
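The four echoes and the two arithmetic guards above are the point of the even_2G_alloc pass: all 1024 requested pages must be visible, with zero surplus and zero reserved, and 1024 pages at the 2048 kB page size is exactly the 'Hugetlb: 2097152 kB' (2 GiB) shown in every snapshot. The same arithmetic standalone, with values copied from the log (illustrative, not SPDK code):

#!/usr/bin/env bash
# Re-derive the accounting checked at hugepages.sh@107/@109 above.
nr_hugepages=1024 # requested allocation
surp=0            # get_meminfo HugePages_Surp
resv=0            # get_meminfo HugePages_Rsvd
hugepagesize_kb=2048

(( 1024 == nr_hugepages + surp + resv )) || echo "accounting mismatch" >&2
(( 1024 == nr_hugepages )) || echo "unexpected page count" >&2

# 1024 pages * 2048 kB = 2097152 kB = 2 GiB, matching 'Hugetlb' above
echo "hugetlb backing: $((nr_hugepages * hugepagesize_kb)) kB"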
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.154 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170616132 kB' 'MemAvailable: 173849364 kB' 'Buffers: 3896 kB' 'Cached: 14665796 kB' 'SwapCached: 0 kB' 'Active: 11533640 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115684 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561532 kB' 'Mapped: 181584 kB' 'Shmem: 10557424 kB' 'KReclaimable: 530356 kB' 'Slab: 1184064 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653708 kB' 'KernelStack: 20752 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace of the field-by-field scan elided: setup/common.sh@32 tests every key from MemTotal through CmaFree against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skips each with "continue"]
00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:27.155
19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.155 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92594612 kB' 'MemUsed: 5021016 kB' 'SwapCached: 0 kB' 'Active: 1975912 kB' 'Inactive: 217236 kB' 'Active(anon): 1814088 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048812 kB' 'Mapped: 82532 kB' 'AnonPages: 147488 kB' 'Shmem: 1669752 kB' 'KernelStack: 10648 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
346420 kB' 'Slab: 662656 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- 
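The condensed field-by-field runs above and below all come from the get_meminfo helper in setup/common.sh, which walks the snapshot just printed and echoes the one value it was asked for. A minimal standalone sketch of that parsing pattern, with illustrative names rather than the verbatim SPDK helper:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern used below

# get_meminfo_sketch FIELD [NODE]: print FIELD from /proc/meminfo, or from
# the per-node copy under /sys/devices/system/node/nodeN/meminfo.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so both
    # sources parse the same "Field: value [kB]" shape.
    mem=("${mem[@]#Node +([0-9]) }")
    local line IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        # Non-matching fields are skipped, which is what the long
        # "[[ ... ]] / continue" runs in the trace correspond to.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total    # prints 1024 on the box above
get_meminfo_sketch HugePages_Surp 0   # prints 0 for node 0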
00:03:27.156 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (node0 scan condensed: every snapshot field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues)
00:03:27.157 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.157 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.157 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78021656 kB' 'MemUsed: 15743852 kB' 'SwapCached: 0 kB' 'Active: 9557208 kB' 'Inactive: 3477076 kB' 'Active(anon): 9301076 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12620944 kB' 'Mapped: 99052 kB' 'AnonPages: 413524 kB' 'Shmem: 8887736 kB' 'KernelStack: 10040 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183936 kB' 'Slab: 521408 kB' 'SReclaimable: 183936 kB' 'SUnreclaim: 337472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
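Both per-node snapshots above are read out of /sys/devices/system/node/nodeN/meminfo. The same allocated and free counts are also exposed by the per-node hugepage counters in sysfs; a small illustrative alternative (the paths are standard kernel ABI, the loop and variable names are ours, and 2048 kB is assumed as the hugepage size used throughout this run):

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Read the per-node 2 MiB hugepage counters straight from sysfs.
    nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(<"$node_dir/hugepages/hugepages-2048kB/free_hugepages")
    echo "node$node: nr_hugepages=$nr free_hugepages=$free"
done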
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.418 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (node1 scan condensed: every snapshot field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues)
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:27.419 node0=512 expecting 512
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:27.419 node1=512 expecting 512
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:27.419 
00:03:27.419 real 0m2.774s
00:03:27.419 user 0m1.123s
00:03:27.419 sys 0m1.688s
00:03:27.419 19:39:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:27.420 19:39:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:27.420 ************************************
00:03:27.420 END TEST even_2G_alloc
00:03:27.420 ************************************
00:03:27.420 19:39:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:27.420 19:39:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:27.420 19:39:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:27.420 19:39:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
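The even_2G_alloc verification above reduces to one aggregate check, (( 1024 == nr_hugepages + surp + resv )), plus the per-node "node0=512 expecting 512" comparison. A sketch that replays the aggregate check from /proc/meminfo; the variable names are illustrative and nr_hugepages is assumed to hold the requested pool size:

#!/usr/bin/env bash
nr_hugepages=1024   # the pool the even_2G_alloc pass expects
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
# Surplus and reserved pages sit on top of the requested pool, so the
# kernel total must equal their sum for the allocation to be verified.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool verified: $total pages"
else
    echo "unexpected HugePages_Total: $total" >&2
fi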
00:03:27.420 ************************************
00:03:27.420 START TEST odd_alloc
00:03:27.420 ************************************
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.420 19:39:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.958 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.958 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.958 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.958 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:29.958 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170590116 kB' 'MemAvailable: 173823348 kB' 'Buffers: 3896 kB' 'Cached: 14665920 kB' 'SwapCached: 0 kB' 'Active: 11533048 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115092 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560196 kB' 'Mapped: 181712 kB' 'Shmem: 10557548 kB' 'KReclaimable: 530356 kB' 'Slab: 1183916 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653560 kB' 'KernelStack: 20768 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12645964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317368 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
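In the trace above, get_test_nr_hugepages_per_node turned the 2098176 kB request (HUGEMEM=2049), rounded up to 1025 pages of 2048 kB, into nodes_test[0]=513 and nodes_test[1]=512. The same result falls out of a plain quotient-plus-remainder split; a sketch of that arithmetic, not the exact countdown loop in setup/hugepages.sh:

nr_hugepages=1025
no_nodes=2
declare -a nodes_test
# Every node gets the integer share of the request ...
for ((node = 0; node < no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / no_nodes))
done
# ... and the odd remainder is handed out one page at a time from
# node 0 upward, which is why node0 ends at 513 and node1 at 512.
for ((node = 0; node < nr_hugepages % no_nodes; node++)); do
    ((nodes_test[node]++))
done
printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"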
'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.960 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.228 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 
19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170591188 kB' 'MemAvailable: 173824420 kB' 'Buffers: 3896 kB' 'Cached: 14665920 kB' 'SwapCached: 0 kB' 'Active: 11532772 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114816 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559948 kB' 'Mapped: 181672 kB' 'Shmem: 10557548 kB' 'KReclaimable: 530356 kB' 'Slab: 1183916 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653560 kB' 'KernelStack: 20912 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12645840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.229 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.230 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170590992 kB' 'MemAvailable: 173824224 kB' 'Buffers: 3896 kB' 'Cached: 14665940 kB' 'SwapCached: 0 kB' 'Active: 11533144 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115188 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560784 kB' 'Mapped: 181596 kB' 'Shmem: 10557568 kB' 'KReclaimable: 530356 kB' 'Slab: 1184024 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653668 kB' 'KernelStack: 20960 kB' 'PageTables: 9536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12643388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 
19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.231 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:30.232 nr_hugepages=1025 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.232 resv_hugepages=0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.232 surplus_hugepages=0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.232 anon_hugepages=0 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- 
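[editor's note] At this point the test has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and asserted the totals before starting a final HugePages_Total lookup. A condensed sketch of that odd_alloc accounting step follows, with variable names taken from the hugepages.sh@97-@110 trace; the get_meminfo calls assume the sketch above, and the final comparison target is an assumption, since the trace is cut off before HugePages_Total is consumed.

    # Sketch of the odd_alloc verification seen at hugepages.sh@97-@110.
    # Assumption: 1025 is the odd page count this test requested up front.
    nr_hugepages=1025

    anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The kernel must account for exactly the odd number requested:
    # surplus or reserved pages may not inflate or hide the total.
    (( 1025 == nr_hugepages + surp + resv ))
    (( 1025 == nr_hugepages ))
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))  # assumed check

The odd count matters because it cannot be satisfied by a tidy power-of-two split; the assertions confirm the allocator both honored it and accounted for it exactly.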
setup/common.sh@20 -- # local mem_f mem 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170592220 kB' 'MemAvailable: 173825452 kB' 'Buffers: 3896 kB' 'Cached: 14665960 kB' 'SwapCached: 0 kB' 'Active: 11533364 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115408 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561032 kB' 'Mapped: 182100 kB' 'Shmem: 10557588 kB' 'KReclaimable: 530356 kB' 'Slab: 1184152 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653796 kB' 'KernelStack: 20544 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12645820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92572028 kB' 'MemUsed: 5043600 kB' 'SwapCached: 0 kB' 'Active: 1979804 kB' 'Inactive: 217236 kB' 'Active(anon): 1817980 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048812 kB' 'Mapped: 82696 kB' 'AnonPages: 151464 kB' 'Shmem: 1669752 kB' 'KernelStack: 10520 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 662548 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
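The scan traced above is setup/common.sh's get_meminfo helper at work: it loads /proc/meminfo (or a per-node meminfo file when a node id is given), strips any leading "Node N " prefix, and walks the "key: value" rows until the requested field matches, echoing the value and returning. A minimal bash sketch of that pattern, reconstructed from this trace rather than copied from the SPDK source (get_meminfo_sketch is a stand-in name):

  shopt -s extglob                       # for the "Node +([0-9]) " strip below
  get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _ line
    local -a mem
    # A per-node file is preferred only when it exists (common.sh@23-24);
    # with an empty $node that path never exists and /proc/meminfo is kept.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node rows carry a "Node 0 " prefix
    local IFS=': '
    for line in "${mem[@]}"; do
      read -r var val _ <<< "$line"      # "HugePages_Total: 1025" -> var, val
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }

Against the snapshot printed above, get_meminfo_sketch HugePages_Total would print 1025, which is exactly the "echo 1025" / "return 0" pair the trace reaches once the key finally matches and the loop stops continuing.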
00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78012436 kB' 'MemUsed: 15753072 kB' 'SwapCached: 0 kB' 'Active: 9557052 kB' 'Inactive: 3477076 kB' 'Active(anon): 9300920 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12621088 kB' 'Mapped: 99752 kB' 'AnonPages: 413600 kB' 'Shmem: 8887880 kB' 'KernelStack: 9992 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183936 kB' 'Slab: 521600 kB' 'SReclaimable: 183936 kB' 'SUnreclaim: 337664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
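Right before this second scan, hugepages.sh did the bookkeeping half of the check: the machine-wide count ("HugePages_Total: 1025" above) was tested against nr_hugepages + surp + resv at hugepages.sh@110, get_nodes found two NUMA nodes, and the @115-117 loop now folds each node's reserved and surplus pages into its expected count. A condensed sketch of that accumulation, using the per-node values this trace reports (512 pages on node0, 513 on node1, zero surplus and zero reserved) and the get_meminfo_sketch stand-in from above:

  # Seed values and loop shape are read off this trace; the real script
  # derives them in get_nodes and verify_nr_hugepages rather than inline.
  declare -a nodes_test=([0]=512 [1]=513)  # expected pages per NUMA node
  resv=0                                   # "HugePages_Rsvd: 0" in the snapshot
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))         # fold in reserved pages
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))         # node0 and node1 both report 0 here
  done
  echo "${nodes_test[@]}"                  # -> 512 513, still summing to 1025

The node1 pass that continues below reads /sys/devices/system/node/node1/meminfo the same way node0's pass read node0's file; only the file path and the returned counts (513 pages instead of 512) differ.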
00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:30.237 node0=512 expecting 513 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:30.237 node1=513 expecting 512 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:30.237 00:03:30.237 real 0m2.899s 00:03:30.237 user 0m1.173s 00:03:30.237 sys 0m1.795s 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.237 19:39:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.237 ************************************ 00:03:30.237 END TEST odd_alloc 00:03:30.237 ************************************ 00:03:30.237 19:39:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:30.237 19:39:21 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.237 19:39:21 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.237 19:39:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.237 ************************************ 00:03:30.237 START TEST custom_alloc 00:03:30.237 ************************************ 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.237 19:39:21 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:30.237 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.238 19:39:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.819 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.819 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.819 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.085 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169537456 kB' 'MemAvailable: 172770688 kB' 'Buffers: 3896 kB' 'Cached: 14666080 kB' 'SwapCached: 0 kB' 'Active: 11532324 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114368 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559848 kB' 'Mapped: 181644 kB' 'Shmem: 10557708 kB' 'KReclaimable: 530356 kB' 'Slab: 1184568 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654212 kB' 'KernelStack: 20560 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12644020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.086 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
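
At this point verify_nr_hugepages has its first number: the AnonHugePages scan above ended with echo 0 / return 0, so anon=0 (hugepages.sh@97). The long runs of IFS=': ' / read -r var val _ / continue records on either side are get_meminfo (setup/common.sh@16-33) replaying /proc/meminfo one key at a time until it reaches the requested field; the records that follow repeat the identical scan for HugePages_Surp and then HugePages_Rsvd, both 0 in the printf snapshots, with HugePages_Total at the planned 1536. A behavioral sketch of the scan, reconstructed from the traced statements (the function wrapper, per-node branch, and return codes are assumptions, not SPDK's literal source):

    #!/usr/bin/env bash
    shopt -s extglob  # the "Node +([0-9]) " strip at common.sh@29 relies on extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer that node's own meminfo file; the trace
        # runs the @23/@25 tests with node empty, so the global file is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"               # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")        # @29: drop per-node "Node N " prefixes
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # @31
            [[ $var == "$get" ]] || continue        # the repeated @32 records in the log
            echo "$val"                             # @33: kB figure or bare page count
            return 0
        done
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # 0
    resv=$(get_meminfo HugePages_Rsvd)  # 0

Each lookup rescans the whole file, which is why the same MemTotal-to-DirectMap1G sequence appears three times in this stretch of the log; hugepages.sh@97-@100 stash the results as anon, surp, and resv for the later comparison against the 1536-page plan.
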
00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169537948 kB' 'MemAvailable: 172771180 kB' 'Buffers: 3896 kB' 'Cached: 14666084 kB' 'SwapCached: 0 kB' 'Active: 11532656 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114700 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560160 kB' 'Mapped: 181644 kB' 'Shmem: 10557712 kB' 'KReclaimable: 530356 kB' 'Slab: 1184568 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654212 kB' 'KernelStack: 20544 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12644040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.087 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 
19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.088 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169538028 kB' 'MemAvailable: 172771260 kB' 'Buffers: 3896 kB' 'Cached: 14666096 kB' 'SwapCached: 0 kB' 'Active: 11532768 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114812 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560224 kB' 'Mapped: 181612 kB' 'Shmem: 
10557724 kB' 'KReclaimable: 530356 kB' 'Slab: 1184632 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654276 kB' 'KernelStack: 20560 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12644060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.089 
19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.089 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.090 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
00:03:33.090 [... per-field scan at setup/common.sh@31-32 elided: Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free each fail the match against HugePages_Rsvd and hit continue ...]
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:33.091 nr_hugepages=1536
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.091 resv_hugepages=0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.091 surplus_hugepages=0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.091 anon_hugepages=0
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.091 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169538028 kB' 'MemAvailable: 172771260 kB' 'Buffers: 3896 kB' 'Cached: 14666124 kB' 'SwapCached: 0 kB' 'Active: 11532296 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114340 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559724 kB' 'Mapped: 181612 kB' 'Shmem: 10557752 kB' 'KReclaimable: 530356 kB' 'Slab: 1184632 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654276 kB' 'KernelStack: 20560 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12644080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:33.091 [... per-field scan at setup/common.sh@31-32 elided: MemTotal through Unaccepted each fail the match against HugePages_Total and hit continue ...]
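The snapshot above is internally consistent: with Hugepagesize at 2048 kB, the 1536 allocated pages account for 1536 * 2048 kB = 3145728 kB, exactly the reported Hugetlb figure, and HugePages_Free equal to HugePages_Total confirms none of them are in use yet. The same cross-check can be reproduced on any Linux box with a one-liner over the standard /proc/meminfo field names (shown for reference, not part of the test itself):

awk '/^HugePages_Total:/ {n = $2} /^Hugepagesize:/ {sz = $2} END {print n * sz " kB"}' /proc/meminfo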
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
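The get_meminfo calls traced above reduce to a small /proc parser: pick the global or per-node meminfo file, strip the "Node N " prefix that the sysfs copies carry, then split each line on ': ' and return the value of the requested field. A minimal sketch of that logic for reference — this is a reconstruction from the trace, not the verbatim setup/common.sh source, and it assumes extglob is enabled as the SPDK setup scripts do:

get_meminfo() { # usage: get_meminfo <field> [numa-node]
	local get=$1 node=${2:-} mem_f=/proc/meminfo
	local mem line var val _
	shopt -s extglob
	# per-node queries read the sysfs copy of meminfo instead of the global one
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# sysfs lines carry a "Node N " prefix that /proc/meminfo lines lack
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

With this helper, get_meminfo HugePages_Total prints 1536 on the snapshot above, and get_meminfo HugePages_Surp 0 prints node 0's surplus count, which is exactly the sequence of lookups the trace performs next.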
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.093 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92566928 kB' 'MemUsed: 5048700 kB' 'SwapCached: 0 kB' 'Active: 1974932 kB' 'Inactive: 217236 kB' 'Active(anon): 1813108 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048888 kB' 'Mapped: 82560 kB' 'AnonPages: 146408 kB' 'Shmem: 1669828 kB' 'KernelStack: 10520 kB' 'PageTables: 3324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 663068 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:33.093 [... per-field scan at setup/common.sh@31-32 elided: MemTotal through HugePages_Free of the node0 snapshot each fail the match against HugePages_Surp and hit continue ...]
00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.095 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76970848 kB' 'MemUsed: 16794660 kB' 'SwapCached: 0 kB' 'Active: 9557684 kB' 'Inactive: 3477076 kB' 'Active(anon): 9301552 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12621152 kB' 'Mapped: 99052 kB' 'AnonPages: 413644 kB' 'Shmem: 8887944 kB' 'KernelStack: 10040 kB' 'PageTables: 5360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183936 kB' 'Slab: 521564 kB' 'SReclaimable: 183936 kB' 'SUnreclaim: 337628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:33.095 [... per-field scan at setup/common.sh@31-32 elided: MemTotal through HugePages_Free of the node1 snapshot each fail the match against HugePages_Surp and hit continue ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc 
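The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it hits the requested key (here HugePages_Surp, which comes back 0). A minimal sketch of that parsing pattern, simplified rather than the verbatim helper; the per-node handling is an assumption based on the /sys/devices/system/node probe visible in the trace:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (simplified, not verbatim).
    get_meminfo_value() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node copy sysfs exposes instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}        # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Surp          # prints 0 on this box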
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:33.096 node0=512 expecting 512
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:33.096 node1=1024 expecting 1024
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:33.096
00:03:33.096 real 0m2.803s
00:03:33.096 user 0m1.143s
00:03:33.096 sys 0m1.725s
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:33.096 19:39:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:33.096 ************************************
00:03:33.096 END TEST custom_alloc
00:03:33.096 ************************************
00:03:33.096 19:39:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:33.096 19:39:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:33.096 19:39:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:33.096 19:39:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.096 ************************************
00:03:33.096 START TEST no_shrink_alloc
00:03:33.096 ************************************
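For context on the trace that follows: the test calls get_test_nr_hugepages 2097152 0, i.e. 2 GiB expressed in kB, pinned to NUMA node 0. With the default 2048 kB hugepage that works out to 2097152 / 2048 = 1024 pages, which is exactly the nr_hugepages=1024 the trace assigns. A small sketch of that arithmetic (variable names ours, not the script's):

    # Sketch of the sizing math performed by get_test_nr_hugepages 2097152 0.
    size_kb=2097152                    # requested pool: 2 GiB in kB
    hugepage_kb=2048                   # Hugepagesize from /proc/meminfo
    user_nodes=(0)                     # node list passed by the test

    nr_hugepages=$((size_kb / hugepage_kb))   # 2097152 / 2048 = 1024
    declare -A nodes_test
    for n in "${user_nodes[@]}"; do
        nodes_test[$n]=$nr_hugepages          # whole pool pinned to node 0
    done
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"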
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:33.096 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:33.097 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:33.097 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:33.357 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:33.357 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.357 19:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.906 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.906 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:35.906 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
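On the device list a few entries above: scripts/setup.sh reports each managed PCI function already bound to vfio-pci, so no rebinding is needed. One way to check a function's bound driver from a shell, shown as an illustration rather than what setup.sh itself does:

    # Illustration: report the kernel driver each PCI function is bound to.
    for dev in /sys/bus/pci/devices/*; do
        [[ -e $dev/driver ]] || continue     # skip functions with no driver bound
        drv=$(basename "$(readlink -f "$dev/driver")")
        echo "${dev##*/}: $drv"              # e.g. 0000:5e:00.0: vfio-pci
    done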
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.906 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170554936 kB' 'MemAvailable: 173788168 kB' 'Buffers: 3896 kB' 'Cached: 14666220 kB' 'SwapCached: 0 kB' 'Active: 11533564 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115608 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560904 kB' 'Mapped: 181720 kB' 'Shmem: 10557848 kB' 'KReclaimable: 530356 kB' 'Slab: 1184380 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654024 kB' 'KernelStack: 20560 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12644184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
00:03:35.906 [... 19:39:27 xtrace condensed: get_meminfo continues past every /proc/meminfo field before AnonHugePages (MemTotal through HardwareCorrupted) without a match ...]
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.907 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.907 [... 19:39:27 get_meminfo prologue repeats as above: local node=, var/val, mem_f=/proc/meminfo, node-meminfo probe, mapfile -t mem, Node-prefix strip, IFS=': ' read ...]
00:03:35.908 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170554684 kB' 'MemAvailable: 173787916 kB' 'Buffers: 3896 kB' 'Cached: 14666220 kB' 'SwapCached: 0 kB' 'Active: 11533204 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115248 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560620 kB' 'Mapped: 181692 kB' 'Shmem: 10557848 kB' 'KReclaimable: 530356 kB' 'Slab: 1184420 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654064 kB' 'KernelStack: 20560 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12644204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
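The snapshot just dumped is the raw input for the bookkeeping that follows: anon has already come back as 0 (no transparent hugepages inflating the counts), and surp and resv are read next. A hedged sketch of those reads, reusing the get_meminfo_value helper sketched earlier; the exact checks in setup/hugepages.sh may differ:

    # Sketch: the follow-up reads verify_nr_hugepages is tracing here.
    anon=$(get_meminfo_value AnonHugePages)    # 0 kB - THP not inflating counts
    surp=$(get_meminfo_value HugePages_Surp)   # 0    - no surplus pages
    resv=$(get_meminfo_value HugePages_Rsvd)   # 0    - none reserved

    total=$(get_meminfo_value HugePages_Total) # 1024, matching nr_hugepages
    free=$(get_meminfo_value HugePages_Free)   # 1024, nothing in use yet
    echo "pool: $total total, $free free, $surp surplus, $resv reserved"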
00:03:35.908 [... 19:39:27 xtrace condensed: get_meminfo scans every /proc/meminfo field before HugePages_Surp (MemTotal through HugePages_Rsvd) and continues past each ...]
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.909 [... 19:39:27 get_meminfo prologue repeats as above: local node=, var/val, mem_f=/proc/meminfo, node-meminfo probe, mapfile -t mem, Node-prefix strip, IFS=': ' read ...]
00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170555356 kB' 'MemAvailable: 173788588 kB' 'Buffers: 3896 kB' 'Cached: 14666252 kB' 'SwapCached: 0 kB' 'Active: 11533560 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115604 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560988 kB' 'Mapped: 181692 kB' 'Shmem: 10557880 kB' 'KReclaimable: 530356 kB' 'Slab: 1184420 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654064 kB' 'KernelStack: 20608 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12644600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
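A side note on the node-probe entry in the prologue: when get_meminfo is handed a node number, it reads /sys/devices/system/node/nodeN/meminfo instead of the global file, which is where per-node readings such as the earlier node0=512/node1=1024 come from. A quick illustration, ours rather than the script's:

    # Illustration: per-node hugepage totals, the per-node counterpart of the
    # global HugePages_* fields being read above.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # sysfs repeats the meminfo format with a "Node N" prefix on each line
        total=$(awk '/HugePages_Total/ { print $NF }' "$node_dir/meminfo")
        echo "node${node}=${total}"
    done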
kB' 00:03:35.909 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 
19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.911 19:39:27 
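The trace above replays the same helper many times, so it is worth spelling out once what it does. Reconstructed from the setup/common.sh@17-33 statements visible in the trace, get_meminfo selects /proc/meminfo (or a per-node meminfo file when a node id is passed), strips the "Node N " prefix that per-node files carry, and scans "key: value" pairs until the requested key matches. The following is a minimal sketch of that logic, not SPDK's verbatim source; names follow the trace, anything else is an assumption:

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below is an extglob pattern

# Sketch of the traced helper: print the value of one meminfo field,
# system-wide by default, or for one NUMA node when an id is passed.
get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# with a node id, prefer the per-node file if it exists
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# per-node files prefix each line with "Node N "; strip that prefix
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Rsvd     # system-wide -> 0 in the trace above
get_meminfo HugePages_Surp 0   # NUMA node 0 -> 0 in the trace below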
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:35.911 nr_hugepages=1024
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.911 resv_hugepages=0
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.911 surplus_hugepages=0
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.911 anon_hugepages=0
00:03:35.911 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.912 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170556872 kB' 'MemAvailable: 173790104 kB' 'Buffers: 3896 kB' 'Cached: 14666292 kB' 'SwapCached: 0 kB' 'Active: 11532760 kB' 'Inactive: 3694312 kB' 'Active(anon): 11114804 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560108 kB' 'Mapped: 181628 kB' 'Shmem: 10557920 kB' 'KReclaimable: 530356 kB' 'Slab: 1184420 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 654064 kB' 'KernelStack: 20544 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12644620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[setup/common.sh@31-32: scan loop elided — every field from MemTotal through Unaccepted is compared against HugePages_Total and skipped]
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
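The assertions at setup/hugepages.sh@107 and @110 above encode the invariant this no_shrink_alloc test is checking: the kernel's HugePages_Total must still equal the requested pool size plus any surplus and reserved pages (1024 == 1024 + 0 + 0 in the trace). A sketch of that check, assuming the get_meminfo reconstruction given earlier; the mismatch handling is illustrative, not SPDK's:

nr_hugepages=1024                      # pool size the test configured
surp=$(get_meminfo HugePages_Surp)     # 0 in the trace
resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace
total=$(get_meminfo HugePages_Total)   # 1024 in the trace

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# the invariant asserted at setup/hugepages.sh@107/@110
if (( total != nr_hugepages + surp + resv )); then
	echo "hugepage accounting mismatch" >&2
	exit 1
fi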
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:35.913 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.914 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91516528 kB' 'MemUsed: 6099100 kB' 'SwapCached: 0 kB' 'Active: 1976156 kB' 'Inactive: 217236 kB' 'Active(anon): 1814332 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048968 kB' 'Mapped: 82576 kB' 'AnonPages: 147604 kB' 'Shmem: 1669908 kB' 'KernelStack: 10520 kB' 'PageTables: 3320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 663044 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: scan loop over the node0 meminfo output elided — each field is compared against HugePages_Surp; the trace breaks off mid-loop]
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.915 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.175 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.175 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:36.175 node0=1024 expecting 1024
00:03:36.175 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:36.176 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:36.176 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:36.176 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:36.176 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.176 19:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.724 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:38.724 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:38.724 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:38.724 INFO: Requested 512 hugepages but 1024 already allocated on node0
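The hand-off above is hugepages.sh@202 setting CLEAR_HUGE=no and NRHUGE=512 before re-running scripts/setup.sh; the INFO line shows setup.sh finding 1024 hugepages already reserved on node0, more than the 512 requested, so nothing had to change. The kernel side of such a request is the per-node sysfs knob. A minimal sketch under standard-kernel assumptions; the helper below is illustrative only, not SPDK's setup.sh (which also handles driver binding, clearing, and multiple page sizes):

    #!/usr/bin/env bash
    # Sketch: ensure NRHUGE 2 MiB hugepages on one NUMA node via the kernel's
    # per-node sysfs knob. Hypothetical helper, shaped after the log above.
    set -euo pipefail

    NRHUGE=${NRHUGE:-512}
    node=${1:-0}
    knob=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages

    current=$(<"$knob")
    if (( current >= NRHUGE )); then
            # Mirrors the log's "Requested 512 hugepages but 1024 already allocated"
            echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
    else
            echo "$NRHUGE" > "$knob"   # needs root
    fi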
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.724 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.725 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170571524 kB' 'MemAvailable: 173804756 kB' 'Buffers: 3896 kB' 'Cached: 14666356 kB' 'SwapCached: 0 kB' 'Active: 11534756 kB' 'Inactive: 3694312 kB' 'Active(anon): 11116800 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561628 kB' 'Mapped: 181824 kB' 'Shmem: 10557984 kB' 'KReclaimable: 530356 kB' 'Slab: 1184184 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653828 kB' 'KernelStack: 20576 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
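The block above is the xtrace of get_meminfo at work: slurp the stats file with mapfile, strip any "Node N " prefix so per-node files parse the same way, then walk key/value pairs with IFS=': ' until the requested key matches and its value is echoed (the @16 printf is the mem array being fed back into the read loop). A self-contained reconstruction of that pattern, inferred from the trace rather than copied from SPDK's setup/common.sh, so treat it as a sketch:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the xtrace: read a meminfo
    # file, strip "Node N " prefixes, print the value for one key.
    shopt -s extglob

    get_meminfo() {
            local get=$1 node=${2:-}
            local var val
            local mem_f mem
            mem_f=/proc/meminfo
            # With a node argument, prefer the per-node stats file when present
            [[ -e /sys/devices/system/node/node$node/meminfo ]] \
                    && mem_f=/sys/devices/system/node/node$node/meminfo
            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] || continue
                    echo "${val:-0}" && return 0
            done < <(printf '%s\n' "${mem[@]}")
            return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this box, per the trace

On this host the call returns 0 for AnonHugePages, which matches the anon=0 assignment that follows below.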
[... repetitive xtrace trimmed: setup/common.sh@31-@32 compared each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages, hitting "continue" on every non-match ...]
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... function entry as above: setup/common.sh@18-@31 locals, mem_f=/proc/meminfo, mapfile -t mem, prefix strip, IFS=': ' ...]
00:03:38.726 19:39:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170571584 kB' 'MemAvailable: 173804816 kB' 'Buffers: 3896 kB' 'Cached: 14666360 kB' 'SwapCached: 0 kB' 'Active: 11533960 kB' 'Inactive: 3694312 kB' 'Active(anon): 11116004 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561288 kB' 'Mapped: 181696 kB' 'Shmem: 10557988 kB' 'KReclaimable: 530356 kB' 'Slab: 1184176 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653820 kB' 'KernelStack: 20608 kB' 'PageTables: 9528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317080 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[... repetitive xtrace trimmed: the same setup/common.sh@31-@32 key scan, this time against HugePages_Surp ...]
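Worth noting why the trace tests /sys/devices/system/node/node/meminfo at @23 (the node number is empty here, hence the @25 [[ -n '' ]] short-circuit back to /proc/meminfo) and strips a "Node +([0-9]) " prefix at @29: the per-node call path reads a file whose lines carry a "Node N " prefix, and removing it lets the same key scan handle both files. A one-liner showing the extglob strip on a representative line, formatted per the kernel's per-node meminfo:

    # Per-node meminfo lines carry a "Node N " prefix, e.g.:
    #   Node 0 HugePages_Total:  1024
    # The extglob strip in the trace normalizes them to the /proc/meminfo shape:
    shopt -s extglob
    line='Node 0 HugePages_Total:  1024'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Total:  1024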
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... function entry as above: setup/common.sh@18-@31 locals, mem_f=/proc/meminfo, mapfile -t mem, prefix strip, IFS=': ' ...]
00:03:38.728 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170576028 kB' 'MemAvailable: 173809260 kB' 'Buffers: 3896 kB' 'Cached: 14666380 kB' 'SwapCached: 0 kB' 'Active: 11533976 kB' 'Inactive: 3694312 kB' 'Active(anon): 11116020 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561684 kB' 'Mapped: 181696 kB' 'Shmem: 10558008 kB' 'KReclaimable: 530356 kB' 'Slab: 1184176 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653820 kB' 'KernelStack: 20592 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317064 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[... repetitive xtrace trimmed: setup/common.sh@31-@32 key scan against HugePages_Rsvd continues ...]
# read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.729 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 
19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.730 nr_hugepages=1024 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.730 resv_hugepages=0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.730 surplus_hugepages=0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.730 anon_hugepages=0 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- 
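The trace above is the inner loop of the get_meminfo helper in setup/common.sh: it snapshots the meminfo file with printf, then re-reads it key by key until the requested field matches and echoes its value. A minimal sketch of that pattern, reconstructed from the @17-@33 xtrace line numbers (not the verbatim SPDK helper, which buffers the file into an array first):

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (assumed simplification;
# names mirror the trace, not the repository source).
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Rsvd:      0" into var=HugePages_Rsvd, val=0
    # (a trailing unit such as "kB" lands in the throwaway third field).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
        continue # every non-matching key: the wall of "continue"s in the trace
    done < /proc/meminfo
    return 1
}

resv=$(get_meminfo HugePages_Rsvd) # -> 0 in the run above
```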
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.730 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170576744 kB' 'MemAvailable: 173809976 kB' 'Buffers: 3896 kB' 'Cached: 14666420 kB' 'SwapCached: 0 kB' 'Active: 11533828 kB' 'Inactive: 3694312 kB' 'Active(anon): 11115872 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561472 kB' 'Mapped: 181696 kB' 'Shmem: 10558048 kB' 'KReclaimable: 530356 kB' 'Slab: 1184176 kB' 'SReclaimable: 530356 kB' 'SUnreclaim: 653820 kB' 'KernelStack: 20592 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12645336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317064 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3904468 kB' 'DirectMap2M: 33523712 kB' 'DirectMap1G: 164626432 kB'
[xtrace condensed: setup/common.sh@31-32 again walks every field of the snapshot above, continuing past each key until it reaches HugePages_Total]
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
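get_nodes discovers the NUMA topology by globbing sysfs, and the follow-up get_meminfo call is node-scoped: given a node argument, the helper switches its input to /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix those lines carry. A hedged sketch of both steps, mirroring the @27-@32 and @22-@29 trace lines (the hugepages-2048kB path reflects this rig's default huge page size, an assumption here):

```bash
#!/usr/bin/env bash
shopt -s extglob # needed for the node+([0-9]) glob used below

declare -A nodes_sys

# Mirror of the get_nodes walk in the trace: one entry per NUMA node.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}" # -> no_nodes=2 on this rig

# Node-scoped meminfo: same parse loop, different file, prefixed lines.
node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

node_meminfo HugePages_Surp 0 # -> 0, matching the trace below
```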
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.732 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91531736 kB' 'MemUsed: 6083892 kB' 'SwapCached: 0 kB' 'Active: 1977364 kB' 'Inactive: 217236 kB' 'Active(anon): 1815540 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 217236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2048988 kB' 'Mapped: 82584 kB' 'AnonPages: 149104 kB' 'Shmem: 1669928 kB' 'KernelStack: 10568 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346420 kB' 'Slab: 662888 kB' 'SReclaimable: 346420 kB' 'SUnreclaim: 316468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the node0 snapshot above is scanned field by field, continuing past each key until HugePages_Surp matches]
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:38.734 node0=1024 expecting 1024
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:38.734 
00:03:38.734 real 0m5.473s
00:03:38.734 user 0m2.158s
00:03:38.734 sys 0m3.344s
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:38.734 19:39:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:38.734 ************************************
00:03:38.734 END TEST no_shrink_alloc
00:03:38.734 ************************************
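The assertion no_shrink_alloc just passed is plain arithmetic over the three lookups traced above: the global HugePages_Total must equal nr_hugepages + surplus + reserved (here 1024 == 1024 + 0 + 0), and the per-node counts must account for the same figure. A worked restatement with this run's values hard-coded (illustrative only; nodes_test is populated earlier in hugepages.sh, outside this excerpt):

```bash
#!/usr/bin/env bash
# Values echoed by the run above; hard-coded here for illustration only.
nr_hugepages=1024 resv=0 surp=0
declare -A nodes_test=([0]=1024 [1]=0)

# Global consistency: total == requested + surplus + reserved (1024 == 1024+0+0)
(( 1024 == nr_hugepages + surp + resv )) || echo "global hugepage accounting off"

# Per-node consistency: node totals must add up to the global figure.
total=0
for node in "${!nodes_test[@]}"; do
    (( total += nodes_test[node] ))
done
(( total == nr_hugepages )) && echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"
```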
00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.734 19:39:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.734 00:03:38.734 real 0m20.506s 00:03:38.734 user 0m7.818s 00:03:38.734 sys 0m11.975s 00:03:38.734 19:39:30 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.734 19:39:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.734 ************************************ 00:03:38.734 END TEST hugepages 00:03:38.734 ************************************ 00:03:38.734 19:39:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.734 19:39:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.734 19:39:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.734 19:39:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.734 ************************************ 00:03:38.734 START TEST driver 00:03:38.734 ************************************ 00:03:38.734 19:39:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.994 * Looking for test storage... 
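The clear_hp pass traced just above walks every NUMA node the suite touched and hands its hugepage pool back to the kernel before the driver tests start. A short sketch, under the assumption that each "echo 0" writes to the per-size nr_hugepages file (xtrace does not print the redirection itself):
for node in "${!nodes_sys[@]}"; do
    for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
        echo 0 > "$hp/nr_hugepages"        # assumed target; the trace shows only "echo 0"
    done
done
export CLEAR_HUGE=yes                      # picked up by scripts/setup.sh on the next reset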
00:03:38.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.994 19:39:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.994 19:39:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.994 19:39:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.196 19:39:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:43.196 19:39:34 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.196 19:39:34 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.196 19:39:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:43.196 ************************************ 00:03:43.196 START TEST guess_driver 00:03:43.196 ************************************ 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:43.196 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:43.196 Looking for driver=vfio-pci 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.196 19:39:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.741 19:39:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.312 19:39:37 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.513 00:03:50.513 real 0m7.373s 00:03:50.513 user 0m2.044s 00:03:50.513 sys 0m3.758s 00:03:50.513 19:39:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.513 19:39:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.513 ************************************ 00:03:50.513 END TEST guess_driver 00:03:50.513 ************************************ 00:03:50.513 00:03:50.513 real 0m11.400s 00:03:50.513 user 0m3.190s 00:03:50.513 sys 0m5.932s 00:03:50.513 19:39:41 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.513 
19:39:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.513 ************************************ 00:03:50.513 END TEST driver 00:03:50.513 ************************************ 00:03:50.513 19:39:41 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.513 19:39:41 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.513 19:39:41 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.513 19:39:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.513 ************************************ 00:03:50.513 START TEST devices 00:03:50.513 ************************************ 00:03:50.513 19:39:41 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.513 * Looking for test storage... 00:03:50.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:50.513 19:39:41 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.513 19:39:41 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.513 19:39:41 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.513 19:39:41 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.119 19:39:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:53.119 19:39:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.119 No valid GPT data, 
bailing 00:03:53.119 19:39:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.119 19:39:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.119 19:39:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.119 19:39:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:53.119 19:39:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:53.379 19:39:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.379 19:39:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:53.379 19:39:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:53.379 19:39:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.379 19:39:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.379 19:39:44 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.379 19:39:44 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.379 19:39:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.380 ************************************ 00:03:53.380 START TEST nvme_mount 00:03:53.380 ************************************ 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.380 19:39:44 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.380 19:39:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.320 Creating new GPT entries in memory. 00:03:54.320 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.320 other utilities. 00:03:54.320 19:39:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.320 19:39:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.321 19:39:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.321 19:39:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.321 19:39:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:55.262 Creating new GPT entries in memory. 00:03:55.262 The operation has completed successfully. 00:03:55.262 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.262 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.262 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1854548 00:03:55.262 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.262 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:55.263 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.263 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:55.263 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.263 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
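The nvme_mount body above reduces to: zap the disk, carve one 1 GiB partition, format it, mount it, and drop a marker file. The bare ":" at setup/devices.sh@56 is consistent with a no-op command whose output redirection creates the test file, since xtrace omits redirections. A condensed sketch, assuming the workspace paths from the trace:
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                            # destroy any old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors x 512 B = 1 GiB; flock serializes sgdisk
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                           # quiet + force: this is a scratch device
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                                # marker file that verify() later checks with [[ -e ... ]]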
00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.522 19:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.060 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.060 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:58.060 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:58.060 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.060 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:58.060 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:58.320 19:39:49 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.320 19:39:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.857 19:39:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.394 19:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.654 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.654 00:04:03.654 real 0m10.279s 00:04:03.654 user 0m2.907s 00:04:03.654 sys 0m5.122s 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.654 19:39:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.654 ************************************ 00:04:03.654 END TEST nvme_mount 00:04:03.654 ************************************ 
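The cleanup_nvme teardown that closed the test above unmounts the scratch filesystem if it is still mounted, then strips filesystem and partition-table signatures so the next test starts from a blank device; in the wipefs output, "53 ef" is the little-endian ext4 superblock magic and "45 46 49 20 50 41 52 54" is the ASCII GPT signature "EFI PART". A sketch using the paths from the trace:
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # drops the ext4 magic (53 ef)
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # drops primary/backup GPT and the protective MBR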
00:04:03.654 19:39:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.654 19:39:55 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.654 19:39:55 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.654 19:39:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.654 ************************************ 00:04:03.654 START TEST dm_mount 00:04:03.654 ************************************ 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.654 19:39:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.592 Creating new GPT entries in memory. 00:04:04.592 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.592 other utilities. 00:04:04.592 19:39:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.592 19:39:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.592 19:39:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.592 19:39:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.592 19:39:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.975 Creating new GPT entries in memory. 00:04:05.975 The operation has completed successfully. 
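dm_mount repeats partition_drive with part_no=2, and the part_start/part_end arithmetic in the trace is what yields the two sgdisk calls (1:2048:2099199 above and 2:2099200:4196351 just below): each 1 GiB partition begins one sector after the previous one ends, with the first aligned at sector 2048. A worked sketch of that loop:
disk=/dev/nvme0n1
size=$((1073741824 / 512))                 # 2097152 sectors per partition, from (( size /= 512 ))
part_start=0 part_end=0
for part in 1 2; do
    ((part_start = part_start == 0 ? 2048 : part_end + 1))
    ((part_end = part_start + size - 1))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done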
00:04:05.975 19:39:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.975 19:39:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.975 19:39:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.975 19:39:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.975 19:39:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.915 The operation has completed successfully. 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1858722 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.915 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.916 19:39:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.463 19:40:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.464 19:40:00 
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.464 19:40:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:12.005 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:12.264 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:12.264
00:04:12.264 real 0m8.566s
00:04:12.264 user 0m2.040s
00:04:12.264 sys 0m3.578s
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:12.264 19:40:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:12.264 ************************************
00:04:12.264 END TEST dm_mount
00:04:12.264 ************************************
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:12.264 19:40:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:12.524 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:12.524 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:12.524 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:12.524 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:12.524 19:40:03 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:12.524
00:04:12.524 real 0m22.265s
00:04:12.524 user 0m6.104s
00:04:12.524 sys 0m10.766s
00:04:12.524 19:40:03 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:12.524 19:40:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:12.524 ************************************
00:04:12.524 END TEST devices
00:04:12.524 ************************************
00:04:12.524
00:04:12.524 real 1m13.811s
00:04:12.524 user 0m23.751s
00:04:12.524 sys 0m40.429s
00:04:12.524 19:40:04 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:12.524 19:40:04 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:12.524 ************************************
00:04:12.524 END TEST setup.sh
00:04:12.524 ************************************
00:04:12.524 19:40:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:15.851 Hugepages
00:04:15.851 node hugesize free / total
00:04:15.851 node0 1048576kB 0 / 0
00:04:15.851 node0 2048kB 2048 / 2048
00:04:15.851 node1 1048576kB 0 / 0
00:04:15.851 node1 2048kB 0 / 0
00:04:15.851
00:04:15.851 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:15.851 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:15.851 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:15.851 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:15.851 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:15.851 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:15.852 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:15.852 19:40:06 -- spdk/autotest.sh@130 -- # uname -s
00:04:15.852 19:40:06 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:15.852 19:40:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:15.852 19:40:06 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:18.389 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:18.389 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:18.958 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:18.958 19:40:10 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:20.338 19:40:11 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:20.338 19:40:11 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:20.338 19:40:11 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:20.338 19:40:11 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:20.338 19:40:11 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:20.338 19:40:11 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:20.338 19:40:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:20.338 19:40:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:20.338 19:40:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:20.338 19:40:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:20.338 19:40:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0
00:04:20.338 19:40:11 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:22.876 Waiting for block devices as requested
00:04:22.876 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:04:22.876 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:22.876 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:22.876 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:23.136 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:23.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:23.136 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:23.136 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:23.396 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:23.396 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:23.396 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:23.655 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:23.655 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:23.655 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:23.655 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:23.915 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:23.915 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:04:23.915 19:40:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
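The bdf enumeration above (autotest_common.sh@1514) relies on gen_nvme.sh printing a bdev config as JSON and jq extracting the transport addresses; a minimal standalone sketch, assuming the same workspace layout:

```bash
#!/usr/bin/env bash
# Sketch of the get_nvme_bdfs helper traced above: gen_nvme.sh emits a
# JSON bdev config, jq pulls out each controller's PCI address.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # e.g. 0000:5e:00.0 on this machine
```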
00:04:23.915 19:40:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme
00:04:23.915 19:40:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:04:23.915 19:40:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:04:23.915 19:40:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:04:23.915 19:40:15 -- common/autotest_common.sh@1545 -- # grep oacs
00:04:23.915 19:40:15 -- common/autotest_common.sh@1545 -- # oacs=' 0xe'
00:04:23.915 19:40:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:04:23.915 19:40:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:04:23.915 19:40:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:04:23.915 19:40:15 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:04:23.915 19:40:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:04:23.915 19:40:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:04:23.915 19:40:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:04:23.915 19:40:15 -- common/autotest_common.sh@1557 -- # continue
00:04:23.915 19:40:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:04:23.915 19:40:15 -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:23.915 19:40:15 -- common/autotest_common.sh@10 -- # set +x
00:04:23.915 19:40:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:04:23.915 19:40:15 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:23.915 19:40:15 -- common/autotest_common.sh@10 -- # set +x
00:04:23.915 19:40:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:27.211 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:27.211 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:27.781 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:27.781 19:40:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:04:27.781 19:40:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:27.781 19:40:19 -- common/autotest_common.sh@10 -- # set +x
00:04:27.781 19:40:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:04:27.781 19:40:19 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs
00:04:27.781 19:40:19 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54
00:04:27.781 19:40:19 -- common/autotest_common.sh@1577 -- # bdfs=()
00:04:27.781 19:40:19 -- common/autotest_common.sh@1577 -- # local bdfs
00:04:27.781 19:40:19 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs
00:04:27.781 19:40:19 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:27.781 19:40:19 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:27.781 19:40:19 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:27.781 19:40:19 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:27.781 19:40:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:27.781 19:40:19 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:27.781 19:40:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0
00:04:28.042 19:40:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs)
00:04:28.042 19:40:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:04:28.042 19:40:19 -- common/autotest_common.sh@1580 -- # device=0x0a54
00:04:28.042 19:40:19 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:28.042 19:40:19 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf)
00:04:28.042 19:40:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0
00:04:28.042 19:40:19 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]]
00:04:28.042 19:40:19 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1867496
00:04:28.042 19:40:19 -- common/autotest_common.sh@1598 -- # waitforlisten 1867496
00:04:28.042 19:40:19 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:28.042 19:40:19 -- common/autotest_common.sh@831 -- # '[' -z 1867496 ']'
00:04:28.042 19:40:19 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:28.042 19:40:19 -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:28.042 19:40:19 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:28.042 19:40:19 -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:28.042 19:40:19 -- common/autotest_common.sh@10 -- # set +x
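The per-controller probing traced just before this (autotest_common.sh@1545-1555) parses `nvme id-ctrl` output; a condensed, hedged sketch of the same namespace-management check, using the controller this run resolved from 0000:5e:00.0:

```bash
#!/usr/bin/env bash
# Sketch of the OACS / unvmcap check above: bit 3 of OACS advertises
# namespace management; unvmcap is the unallocated NVM capacity.
ctrlr=/dev/nvme0   # resolved via /sys/class/nvme in this run

oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)      # ' 0xe' here
oacs_ns_manage=$(( oacs & 0x8 ))
if [[ $oacs_ns_manage -ne 0 ]]; then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "unvmcap:$unvmcap"   # ' 0' here, so no namespace revert was needed
fi
```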
00:04:28.042 [2024-07-24 19:40:19.438646] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:04:28.042 [2024-07-24 19:40:19.438694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867496 ]
00:04:28.042 EAL: No free 2048 kB hugepages reported on node 1
00:04:28.042 [2024-07-24 19:40:19.493404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:28.042 [2024-07-24 19:40:19.567784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:28.980 19:40:20 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:28.980 19:40:20 -- common/autotest_common.sh@864 -- # return 0
00:04:28.980 19:40:20 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:04:28.980 19:40:20 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:04:28.980 19:40:20 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:32.274 nvme0n1
00:04:32.274 19:40:23 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:32.274 [2024-07-24 19:40:23.367595] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:32.274 request:
00:04:32.274 {
00:04:32.274 "nvme_ctrlr_name": "nvme0",
00:04:32.274 "password": "test",
00:04:32.274 "method": "bdev_nvme_opal_revert",
00:04:32.274 "req_id": 1
00:04:32.274 }
00:04:32.274 Got JSON-RPC error response
00:04:32.274 response:
00:04:32.274 {
00:04:32.274 "code": -32602,
00:04:32.274 "message": "Invalid parameters"
00:04:32.274 }
00:04:32.274 19:40:23 -- common/autotest_common.sh@1604 -- # true
00:04:32.274 19:40:23 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:04:32.274 19:40:23 -- common/autotest_common.sh@1608 -- # killprocess 1867496
00:04:32.274 19:40:23 -- common/autotest_common.sh@950 -- # '[' -z 1867496 ']'
00:04:32.274 19:40:23 -- common/autotest_common.sh@954 -- # kill -0 1867496
00:04:32.274 19:40:23 -- common/autotest_common.sh@955 -- # uname
00:04:32.274 19:40:23 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:32.274 19:40:23 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867496
00:04:32.274 19:40:23 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:32.274 19:40:23 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:32.274 19:40:23 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867496'
killing process with pid 1867496
19:40:23 -- common/autotest_common.sh@969 -- # kill 1867496
19:40:23 -- common/autotest_common.sh@974 -- # wait 1867496
00:04:33.753 19:40:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:33.753 19:40:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:33.753 19:40:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:33.753 19:40:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:33.753 19:40:25 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:33.753 19:40:25 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:33.753 19:40:25 -- common/autotest_common.sh@10 -- # set +x
00:04:33.753 19:40:25 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:04:33.753 19:40:25 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
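The opal revert failure above is a plain JSON-RPC exchange over /var/tmp/spdk.sock; it can be replayed by hand. A hedged sketch, using the socket path and bdev name from this run (the raw form assumes a netcat build with UNIX-socket support and approximates the payload rpc.py builds):

```bash
#!/usr/bin/env bash
# Sketch: replay the bdev_nvme_opal_revert request shown above against
# a running spdk_tgt.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# High-level form (what the test actually ran):
"$rpc" -s /var/tmp/spdk.sock bdev_nvme_opal_revert -b nvme0 -p test

# Raw form: roughly the same request piped straight into the socket.
printf '%s' '{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_opal_revert", "params": {"nvme_ctrlr_name": "nvme0", "password": "test"}}' \
    | nc -U /var/tmp/spdk.sock
```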
00:04:33.753 19:40:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.753 19:40:25 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.753 19:40:25 -- common/autotest_common.sh@10 -- # set +x
00:04:33.753 ************************************
00:04:33.753 START TEST env
00:04:33.753 ************************************
00:04:33.753 19:40:25 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:33.753 * Looking for test storage...
00:04:33.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:33.753 19:40:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:33.753 19:40:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.753 19:40:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.753 19:40:25 env -- common/autotest_common.sh@10 -- # set +x
00:04:33.754 ************************************
00:04:33.754 START TEST env_memory
00:04:33.754 ************************************
00:04:33.754 19:40:25 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:33.754
00:04:33.754
00:04:33.754 CUnit - A unit testing framework for C - Version 2.1-3
00:04:33.754 http://cunit.sourceforge.net/
00:04:33.754
00:04:33.754
00:04:33.754 Suite: memory
00:04:33.754 Test: alloc and free memory map ...[2024-07-24 19:40:25.223896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:33.754 passed
00:04:33.754 Test: mem map translation ...[2024-07-24 19:40:25.242852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:33.754 [2024-07-24 19:40:25.242867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:33.754 [2024-07-24 19:40:25.242903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:33.754 [2024-07-24 19:40:25.242911] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:33.754 passed
00:04:33.754 Test: mem map registration ...[2024-07-24 19:40:25.279669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:33.754 [2024-07-24 19:40:25.279683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:33.754 passed
00:04:33.754 Test: mem map adjacent registrations ...passed
00:04:33.754
00:04:33.754 Run Summary: Type Total Ran Passed Failed Inactive
00:04:33.754 suites 1 1 n/a 0 0
00:04:33.754 tests 4 4 4 0 0
00:04:33.754 asserts 152 152 152 0 n/a
00:04:33.754
00:04:33.754 Elapsed time = 0.134 seconds
00:04:33.754
00:04:33.754 real 0m0.146s
00:04:33.754 user 0m0.136s
00:04:33.754 sys 0m0.009s
00:04:33.754 19:40:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:33.754 19:40:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:33.754 ************************************
00:04:33.754 END TEST env_memory
00:04:33.754 ************************************
00:04:34.014 19:40:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:34.014 19:40:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:34.014 19:40:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:34.014 19:40:25 env -- common/autotest_common.sh@10 -- # set +x
00:04:34.014 ************************************
00:04:34.014 START TEST env_vtophys
00:04:34.014 ************************************
00:04:34.014 19:40:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:34.014 EAL: lib.eal log level changed from notice to debug
00:04:34.014 EAL: Detected lcore 0 as core 0 on socket 0
00:04:34.014 EAL: Detected lcore 1 as core 1 on socket 0
00:04:34.014 EAL: Detected lcore 2 as core 2 on socket 0
00:04:34.014 EAL: Detected lcore 3 as core 3 on socket 0
00:04:34.014 EAL: Detected lcore 4 as core 4 on socket 0
00:04:34.014 EAL: Detected lcore 5 as core 5 on socket 0
00:04:34.014 EAL: Detected lcore 6 as core 6 on socket 0
00:04:34.014 EAL: Detected lcore 7 as core 8 on socket 0
00:04:34.014 EAL: Detected lcore 8 as core 9 on socket 0
00:04:34.014 EAL: Detected lcore 9 as core 10 on socket 0
00:04:34.014 EAL: Detected lcore 10 as core 11 on socket 0
00:04:34.014 EAL: Detected lcore 11 as core 12 on socket 0
00:04:34.014 EAL: Detected lcore 12 as core 13 on socket 0
00:04:34.014 EAL: Detected lcore 13 as core 16 on socket 0
00:04:34.014 EAL: Detected lcore 14 as core 17 on socket 0
00:04:34.014 EAL: Detected lcore 15 as core 18 on socket 0
00:04:34.014 EAL: Detected lcore 16 as core 19 on socket 0
00:04:34.014 EAL: Detected lcore 17 as core 20 on socket 0
00:04:34.014 EAL: Detected lcore 18 as core 21 on socket 0
00:04:34.014 EAL: Detected lcore 19 as core 25 on socket 0
00:04:34.014 EAL: Detected lcore 20 as core 26 on socket 0
00:04:34.014 EAL: Detected lcore 21 as core 27 on socket 0
00:04:34.014 EAL: Detected lcore 22 as core 28 on socket 0
00:04:34.014 EAL: Detected lcore 23 as core 29 on socket 0
00:04:34.014 EAL: Detected lcore 24 as core 0 on socket 1
00:04:34.014 EAL: Detected lcore 25 as core 1 on socket 1
00:04:34.014 EAL: Detected lcore 26 as core 2 on socket 1
00:04:34.014 EAL: Detected lcore 27 as core 3 on socket 1
00:04:34.014 EAL: Detected lcore 28 as core 4 on socket 1
00:04:34.014 EAL: Detected lcore 29 as core 5 on socket 1
00:04:34.014 EAL: Detected lcore 30 as core 6 on socket 1
00:04:34.014 EAL: Detected lcore 31 as core 9 on socket 1
00:04:34.014 EAL: Detected lcore 32 as core 10 on socket 1
00:04:34.014 EAL: Detected lcore 33 as core 11 on socket 1
00:04:34.014 EAL: Detected lcore 34 as core 12 on socket 1
00:04:34.014 EAL: Detected lcore 35 as core 13 on socket 1
00:04:34.014 EAL: Detected lcore 36 as core 16 on socket 1
00:04:34.014 EAL: Detected lcore 37 as core 17 on socket 1
00:04:34.014 EAL: Detected lcore 38 as core 18 on socket 1
00:04:34.014 EAL: Detected lcore 39 as core 19 on socket 1
00:04:34.014 EAL: Detected lcore 40 as core 20 on socket 1
00:04:34.014 EAL: Detected lcore 41 as core 21 on socket 1
00:04:34.014 EAL: Detected lcore 42 as core 24 on socket 1
00:04:34.014 EAL: Detected lcore 43 as core 25 on socket 1
00:04:34.014 EAL: Detected lcore 44 as core 26 on socket 1
00:04:34.014 EAL: Detected lcore 45 as core 27 on socket 1
00:04:34.014 EAL: Detected lcore 46 as core 28 on socket 1
00:04:34.014 EAL: Detected lcore 47 as core 29 on socket 1
00:04:34.014 EAL: Detected lcore 48 as core 0 on socket 0
00:04:34.014 EAL: Detected lcore 49 as core 1 on socket 0
00:04:34.014 EAL: Detected lcore 50 as core 2 on socket 0
00:04:34.014 EAL: Detected lcore 51 as core 3 on socket 0
00:04:34.014 EAL: Detected lcore 52 as core 4 on socket 0
00:04:34.014 EAL: Detected lcore 53 as core 5 on socket 0
00:04:34.014 EAL: Detected lcore 54 as core 6 on socket 0
00:04:34.014 EAL: Detected lcore 55 as core 8 on socket 0
00:04:34.014 EAL: Detected lcore 56 as core 9 on socket 0
00:04:34.014 EAL: Detected lcore 57 as core 10 on socket 0
00:04:34.014 EAL: Detected lcore 58 as core 11 on socket 0
00:04:34.014 EAL: Detected lcore 59 as core 12 on socket 0
00:04:34.015 EAL: Detected lcore 60 as core 13 on socket 0
00:04:34.015 EAL: Detected lcore 61 as core 16 on socket 0
00:04:34.015 EAL: Detected lcore 62 as core 17 on socket 0
00:04:34.015 EAL: Detected lcore 63 as core 18 on socket 0
00:04:34.015 EAL: Detected lcore 64 as core 19 on socket 0
00:04:34.015 EAL: Detected lcore 65 as core 20 on socket 0
00:04:34.015 EAL: Detected lcore 66 as core 21 on socket 0
00:04:34.015 EAL: Detected lcore 67 as core 25 on socket 0
00:04:34.015 EAL: Detected lcore 68 as core 26 on socket 0
00:04:34.015 EAL: Detected lcore 69 as core 27 on socket 0
00:04:34.015 EAL: Detected lcore 70 as core 28 on socket 0
00:04:34.015 EAL: Detected lcore 71 as core 29 on socket 0
00:04:34.015 EAL: Detected lcore 72 as core 0 on socket 1
00:04:34.015 EAL: Detected lcore 73 as core 1 on socket 1
00:04:34.015 EAL: Detected lcore 74 as core 2 on socket 1
00:04:34.015 EAL: Detected lcore 75 as core 3 on socket 1
00:04:34.015 EAL: Detected lcore 76 as core 4 on socket 1
00:04:34.015 EAL: Detected lcore 77 as core 5 on socket 1
00:04:34.015 EAL: Detected lcore 78 as core 6 on socket 1
00:04:34.015 EAL: Detected lcore 79 as core 9 on socket 1
00:04:34.015 EAL: Detected lcore 80 as core 10 on socket 1
00:04:34.015 EAL: Detected lcore 81 as core 11 on socket 1
00:04:34.015 EAL: Detected lcore 82 as core 12 on socket 1
00:04:34.015 EAL: Detected lcore 83 as core 13 on socket 1
00:04:34.015 EAL: Detected lcore 84 as core 16 on socket 1
00:04:34.015 EAL: Detected lcore 85 as core 17 on socket 1
00:04:34.015 EAL: Detected lcore 86 as core 18 on socket 1
00:04:34.015 EAL: Detected lcore 87 as core 19 on socket 1
00:04:34.015 EAL: Detected lcore 88 as core 20 on socket 1
00:04:34.015 EAL: Detected lcore 89 as core 21 on socket 1
00:04:34.015 EAL: Detected lcore 90 as core 24 on socket 1
00:04:34.015 EAL: Detected lcore 91 as core 25 on socket 1
00:04:34.015 EAL: Detected lcore 92 as core 26 on socket 1
00:04:34.015 EAL: Detected lcore 93 as core 27 on socket 1
00:04:34.015 EAL: Detected lcore 94 as core 28 on socket 1
00:04:34.015 EAL: Detected lcore 95 as core 29 on socket 1
00:04:34.015 EAL: Maximum logical cores by configuration: 128
00:04:34.015 EAL: Detected CPU lcores: 96
00:04:34.015 EAL: Detected NUMA nodes: 2
00:04:34.015 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:34.015 EAL: Detected shared linkage of DPDK
00:04:34.015 EAL: No shared files mode enabled, IPC will be disabled
00:04:34.015 EAL: Bus pci wants IOVA as 'DC'
00:04:34.015 EAL: Buses did not request a specific IOVA mode.
00:04:34.015 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:34.015 EAL: Selected IOVA mode 'VA'
00:04:34.015 EAL: No free 2048 kB hugepages reported on node 1
00:04:34.015 EAL: Probing VFIO support...
00:04:34.015 EAL: IOMMU type 1 (Type 1) is supported
00:04:34.015 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:34.015 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:34.015 EAL: VFIO support initialized
00:04:34.015 EAL: Ask a virtual area of 0x2e000 bytes
00:04:34.015 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:34.015 EAL: Setting up physically contiguous memory...
00:04:34.015 EAL: Setting maximum number of open files to 524288
00:04:34.015 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:34.015 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:34.015 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:34.015 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:34.015 EAL: Ask a virtual area of 0x61000 bytes
00:04:34.015 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:34.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:34.015 EAL: Ask a virtual area of 0x400000000 bytes
00:04:34.015 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:34.015 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:34.015 EAL: Hugepages will be freed exactly as allocated.
00:04:34.015 EAL: No shared files mode enabled, IPC is disabled
00:04:34.015 EAL: No shared files mode enabled, IPC is disabled
00:04:34.015 EAL: TSC frequency is ~2300000 KHz
00:04:34.015 EAL: Main lcore 0 is ready (tid=7fec7f019a00;cpuset=[0])
00:04:34.015 EAL: Trying to obtain current memory policy.
00:04:34.015 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.015 EAL: Restoring previous memory policy: 0
00:04:34.015 EAL: request: mp_malloc_sync
00:04:34.015 EAL: No shared files mode enabled, IPC is disabled
00:04:34.015 EAL: Heap on socket 0 was expanded by 2MB
00:04:34.015 EAL: No shared files mode enabled, IPC is disabled
00:04:34.015 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:34.015 EAL: Mem event callback 'spdk:(nil)' registered
00:04:34.015
00:04:34.015
00:04:34.015 CUnit - A unit testing framework for C - Version 2.1-3
00:04:34.015 http://cunit.sourceforge.net/
00:04:34.015
00:04:34.015
00:04:34.015 Suite: components_suite
00:04:34.015 Test: vtophys_malloc_test ...passed
00:04:34.016 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 4MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 4MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 6MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 6MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 10MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 10MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 18MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 18MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 34MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 34MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 66MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 66MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.016 EAL: Restoring previous memory policy: 4
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was expanded by 130MB
00:04:34.016 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.016 EAL: request: mp_malloc_sync
00:04:34.016 EAL: No shared files mode enabled, IPC is disabled
00:04:34.016 EAL: Heap on socket 0 was shrunk by 130MB
00:04:34.016 EAL: Trying to obtain current memory policy.
00:04:34.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.346 EAL: Restoring previous memory policy: 4
00:04:34.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.346 EAL: request: mp_malloc_sync
00:04:34.346 EAL: No shared files mode enabled, IPC is disabled
00:04:34.346 EAL: Heap on socket 0 was expanded by 258MB
00:04:34.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.346 EAL: request: mp_malloc_sync
00:04:34.346 EAL: No shared files mode enabled, IPC is disabled
00:04:34.346 EAL: Heap on socket 0 was shrunk by 258MB
00:04:34.346 EAL: Trying to obtain current memory policy.
00:04:34.346 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.346 EAL: Restoring previous memory policy: 4
00:04:34.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.346 EAL: request: mp_malloc_sync
00:04:34.346 EAL: No shared files mode enabled, IPC is disabled
00:04:34.346 EAL: Heap on socket 0 was expanded by 514MB
00:04:34.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.614 EAL: request: mp_malloc_sync
00:04:34.614 EAL: No shared files mode enabled, IPC is disabled
00:04:34.614 EAL: Heap on socket 0 was shrunk by 514MB
00:04:34.614 EAL: Trying to obtain current memory policy.
00:04:34.614 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:34.614 EAL: Restoring previous memory policy: 4
00:04:34.614 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.614 EAL: request: mp_malloc_sync
00:04:34.614 EAL: No shared files mode enabled, IPC is disabled
00:04:34.614 EAL: Heap on socket 0 was expanded by 1026MB
00:04:34.873 EAL: Calling mem event callback 'spdk:(nil)'
00:04:34.873 EAL: request: mp_malloc_sync
00:04:34.873 EAL: No shared files mode enabled, IPC is disabled
00:04:34.873 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:34.873 passed
00:04:34.873
00:04:34.873 Run Summary: Type Total Ran Passed Failed Inactive
00:04:34.873 suites 1 1 n/a 0 0
00:04:34.873 tests 2 2 2 0 0
00:04:34.873 asserts 497 497 497 0 n/a
00:04:34.873
00:04:34.873 Elapsed time = 0.969 seconds
00:04:35.132 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.132 EAL: request: mp_malloc_sync
00:04:35.132 EAL: No shared files mode enabled, IPC is disabled
00:04:35.132 EAL: Heap on socket 0 was shrunk by 2MB
00:04:35.132 EAL: No shared files mode enabled, IPC is disabled
00:04:35.132 EAL: No shared files mode enabled, IPC is disabled
00:04:35.132 EAL: No shared files mode enabled, IPC is disabled
00:04:35.132
00:04:35.132 real 0m1.089s
00:04:35.132 user 0m0.638s
00:04:35.132 sys 0m0.412s
00:04:35.132 19:40:26 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.132 19:40:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:35.132 ************************************
00:04:35.132 END TEST env_vtophys
00:04:35.132 ************************************
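env_vtophys drives all of the heap expansions above out of the per-node 2048kB hugepage pools shown in the setup.sh status table earlier (node0: 2048 / 2048). A quick hedged way to inspect those pools, using standard Linux sysfs paths:

```bash
#!/usr/bin/env bash
# Sketch: report the per-NUMA-node 2048kB hugepage pools that back the
# EAL heap expand/shrink cycles above.
for node in /sys/devices/system/node/node[0-9]*; do
    nr=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
    echo "$(basename "$node"): $free free / $nr total"
done
```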
00:04:35.132 19:40:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:35.132 19:40:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:35.132 19:40:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.132 19:40:26 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.132 ************************************
00:04:35.132 START TEST env_pci
00:04:35.132 ************************************
00:04:35.132 19:40:26 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:35.132
00:04:35.132
00:04:35.132 CUnit - A unit testing framework for C - Version 2.1-3
00:04:35.132 http://cunit.sourceforge.net/
00:04:35.132
00:04:35.132
00:04:35.132 Suite: pci
00:04:35.132 Test: pci_hook ...[2024-07-24 19:40:26.559687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1868839 has claimed it
00:04:35.132 EAL: Cannot find device (10000:00:01.0)
00:04:35.132 EAL: Failed to attach device on primary process
00:04:35.132 passed
00:04:35.132
00:04:35.132 Run Summary: Type Total Ran Passed Failed Inactive
00:04:35.132 suites 1 1 n/a 0 0
00:04:35.132 tests 1 1 1 0 0
00:04:35.132 asserts 25 25 25 0 n/a
00:04:35.132
00:04:35.132 Elapsed time = 0.028 seconds
00:04:35.132
00:04:35.132 real 0m0.048s
00:04:35.132 user 0m0.013s
00:04:35.132 sys 0m0.035s
00:04:35.132 19:40:26 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.132 19:40:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:35.132 ************************************
00:04:35.132 END TEST env_pci
00:04:35.132 ************************************
00:04:35.132 19:40:26 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:35.132 19:40:26 env -- env/env.sh@15 -- # uname
00:04:35.132 19:40:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:35.132 19:40:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:35.132 19:40:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:35.132 19:40:26 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:35.132 19:40:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.132 19:40:26 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.133 ************************************
00:04:35.133 START TEST env_dpdk_post_init
00:04:35.133 ************************************
00:04:35.133 19:40:26 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:35.133 EAL: Detected CPU lcores: 96
00:04:35.133 EAL: Detected NUMA nodes: 2
00:04:35.133 EAL: Detected shared linkage of DPDK
00:04:35.133 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:35.133 EAL: Selected IOVA mode 'VA'
00:04:35.133 EAL: No free 2048 kB hugepages reported on node 1
00:04:35.133 EAL: VFIO support initialized
00:04:35.133 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:35.392 EAL: Using IOMMU type 1 (Type 1)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:35.392 EAL: Ignore mapping IO port bar(1)
00:04:35.392 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:36.329 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:36.329 EAL: Ignore mapping IO port bar(1)
00:04:36.329 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:39.616 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:39.616 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:39.616 Starting DPDK initialization...
00:04:39.616 Starting SPDK post initialization...
00:04:39.616 SPDK NVMe probe
00:04:39.616 Attaching to 0000:5e:00.0
00:04:39.616 Attached to 0000:5e:00.0
00:04:39.616 Cleaning up...
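The post-init run above can be reproduced standalone; the flags are exactly the ones env.sh passed in (single core, fixed base virtual address), and it presupposes devices bound and hugepages configured by scripts/setup.sh:

```bash
#!/usr/bin/env bash
# Sketch: re-run the DPDK post-initialization test by itself with the
# same arguments the env suite used above.
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env

"$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
```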
00:04:39.616 00:04:39.617 real 0m4.351s 00:04:39.617 user 0m3.301s 00:04:39.617 sys 0m0.123s 00:04:39.617 19:40:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.617 19:40:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.617 ************************************ 00:04:39.617 END TEST env_dpdk_post_init 00:04:39.617 ************************************ 00:04:39.617 19:40:31 env -- env/env.sh@26 -- # uname 00:04:39.617 19:40:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.617 19:40:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.617 19:40:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.617 19:40:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.617 19:40:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.617 ************************************ 00:04:39.617 START TEST env_mem_callbacks 00:04:39.617 ************************************ 00:04:39.617 19:40:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.617 EAL: Detected CPU lcores: 96 00:04:39.617 EAL: Detected NUMA nodes: 2 00:04:39.617 EAL: Detected shared linkage of DPDK 00:04:39.617 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.617 EAL: Selected IOVA mode 'VA' 00:04:39.617 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.617 EAL: VFIO support initialized 00:04:39.617 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.617 00:04:39.617 00:04:39.617 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.617 http://cunit.sourceforge.net/ 00:04:39.617 00:04:39.617 00:04:39.617 Suite: memory 00:04:39.617 Test: test ... 
00:04:39.617 register 0x200000200000 2097152 00:04:39.617 malloc 3145728 00:04:39.617 register 0x200000400000 4194304 00:04:39.617 buf 0x200000500000 len 3145728 PASSED 00:04:39.617 malloc 64 00:04:39.617 buf 0x2000004fff40 len 64 PASSED 00:04:39.617 malloc 4194304 00:04:39.617 register 0x200000800000 6291456 00:04:39.617 buf 0x200000a00000 len 4194304 PASSED 00:04:39.617 free 0x200000500000 3145728 00:04:39.617 free 0x2000004fff40 64 00:04:39.617 unregister 0x200000400000 4194304 PASSED 00:04:39.617 free 0x200000a00000 4194304 00:04:39.617 unregister 0x200000800000 6291456 PASSED 00:04:39.617 malloc 8388608 00:04:39.617 register 0x200000400000 10485760 00:04:39.617 buf 0x200000600000 len 8388608 PASSED 00:04:39.617 free 0x200000600000 8388608 00:04:39.617 unregister 0x200000400000 10485760 PASSED 00:04:39.617 passed 00:04:39.617 00:04:39.617 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.617 suites 1 1 n/a 0 0 00:04:39.617 tests 1 1 1 0 0 00:04:39.617 asserts 15 15 15 0 n/a 00:04:39.617 00:04:39.617 Elapsed time = 0.005 seconds 00:04:39.617 00:04:39.617 real 0m0.049s 00:04:39.617 user 0m0.015s 00:04:39.617 sys 0m0.034s 00:04:39.617 19:40:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.617 19:40:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.617 ************************************ 00:04:39.617 END TEST env_mem_callbacks 00:04:39.617 ************************************ 00:04:39.617 00:04:39.617 real 0m6.085s 00:04:39.617 user 0m4.250s 00:04:39.617 sys 0m0.895s 00:04:39.617 19:40:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.617 19:40:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.617 ************************************ 00:04:39.617 END TEST env 00:04:39.617 ************************************ 00:04:39.617 19:40:31 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.617 19:40:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.617 19:40:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.617 19:40:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.877 ************************************ 00:04:39.877 START TEST rpc 00:04:39.877 ************************************ 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.877 * Looking for test storage... 00:04:39.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.877 19:40:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1869660 00:04:39.877 19:40:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.877 19:40:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:39.877 19:40:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1869660 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 1869660 ']' 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
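waitforlisten (autotest_common.sh@836-838, max_retries=100) polls until the target's RPC socket answers; a hedged approximation of that startup pattern, using rpc_get_methods as the liveness probe:

```bash
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern above: launch spdk_tgt, then poll
# its UNIX-domain RPC socket until an RPC call succeeds.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock

"$spdk/build/bin/spdk_tgt" &
spdk_tgt_pid=$!

for (( i = 0; i < 100; i++ )); do
    if "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "spdk_tgt ($spdk_tgt_pid) is listening on $rpc_addr"
        break
    fi
    sleep 0.5
done
```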
00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.877 19:40:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.877 [2024-07-24 19:40:31.373252] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:04:39.877 [2024-07-24 19:40:31.373297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869660 ] 00:04:39.877 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.877 [2024-07-24 19:40:31.428185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.136 [2024-07-24 19:40:31.503946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:40.136 [2024-07-24 19:40:31.503983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1869660' to capture a snapshot of events at runtime. 00:04:40.136 [2024-07-24 19:40:31.503990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:40.136 [2024-07-24 19:40:31.503996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:40.136 [2024-07-24 19:40:31.504001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1869660 for offline analysis/debug. 00:04:40.136 [2024-07-24 19:40:31.504020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.703 19:40:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.703 19:40:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:40.703 19:40:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.703 19:40:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.703 19:40:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.703 19:40:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.703 19:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.703 19:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.703 19:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.703 ************************************ 00:04:40.703 START TEST rpc_integrity 00:04:40.703 ************************************ 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.703 19:40:32 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.703 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.703 { 00:04:40.703 "name": "Malloc0", 00:04:40.703 "aliases": [ 00:04:40.703 "408136f9-63a1-4445-a6c8-45eded2711d1" 00:04:40.703 ], 00:04:40.703 "product_name": "Malloc disk", 00:04:40.703 "block_size": 512, 00:04:40.703 "num_blocks": 16384, 00:04:40.703 "uuid": "408136f9-63a1-4445-a6c8-45eded2711d1", 00:04:40.703 "assigned_rate_limits": { 00:04:40.703 "rw_ios_per_sec": 0, 00:04:40.703 "rw_mbytes_per_sec": 0, 00:04:40.703 "r_mbytes_per_sec": 0, 00:04:40.703 "w_mbytes_per_sec": 0 00:04:40.703 }, 00:04:40.703 "claimed": false, 00:04:40.703 "zoned": false, 00:04:40.703 "supported_io_types": { 00:04:40.703 "read": true, 00:04:40.703 "write": true, 00:04:40.703 "unmap": true, 00:04:40.703 "flush": true, 00:04:40.703 "reset": true, 00:04:40.703 "nvme_admin": false, 00:04:40.703 "nvme_io": false, 00:04:40.703 "nvme_io_md": false, 00:04:40.703 "write_zeroes": true, 00:04:40.703 "zcopy": true, 00:04:40.703 "get_zone_info": false, 00:04:40.703 "zone_management": false, 00:04:40.703 "zone_append": false, 00:04:40.703 "compare": false, 00:04:40.703 "compare_and_write": false, 00:04:40.703 "abort": true, 00:04:40.703 "seek_hole": false, 00:04:40.703 "seek_data": false, 00:04:40.703 "copy": true, 00:04:40.703 "nvme_iov_md": false 00:04:40.703 }, 00:04:40.703 "memory_domains": [ 00:04:40.703 { 00:04:40.703 "dma_device_id": "system", 00:04:40.703 "dma_device_type": 1 00:04:40.703 }, 00:04:40.703 { 00:04:40.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.703 "dma_device_type": 2 00:04:40.703 } 00:04:40.703 ], 00:04:40.703 "driver_specific": {} 00:04:40.703 } 00:04:40.703 ]' 00:04:40.703 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.963 [2024-07-24 19:40:32.327228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.963 [2024-07-24 19:40:32.327259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.963 [2024-07-24 19:40:32.327271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19592d0 00:04:40.963 [2024-07-24 19:40:32.327278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.963 [2024-07-24 19:40:32.328369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
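rpc_integrity is assembling a two-level bdev stack here: a RAM-backed malloc bdev, then a passthru vbdev that claims it (hence the "bdev claimed" notice above). The next bdev_get_bdevs must report both devices, and the teardown at the end of the test must bring the list back to zero. The same sequence against a running target, via scripts/rpc.py — the script that the harness's rpc_cmd helper drives, with the default socket path assumed:

  ./scripts/rpc.py bdev_malloc_create 8 512          # 8 MiB of 512-byte blocks; prints the new name (Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 0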
00:04:40.963 [2024-07-24 19:40:32.328391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.963 Passthru0 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.963 { 00:04:40.963 "name": "Malloc0", 00:04:40.963 "aliases": [ 00:04:40.963 "408136f9-63a1-4445-a6c8-45eded2711d1" 00:04:40.963 ], 00:04:40.963 "product_name": "Malloc disk", 00:04:40.963 "block_size": 512, 00:04:40.963 "num_blocks": 16384, 00:04:40.963 "uuid": "408136f9-63a1-4445-a6c8-45eded2711d1", 00:04:40.963 "assigned_rate_limits": { 00:04:40.963 "rw_ios_per_sec": 0, 00:04:40.963 "rw_mbytes_per_sec": 0, 00:04:40.963 "r_mbytes_per_sec": 0, 00:04:40.963 "w_mbytes_per_sec": 0 00:04:40.963 }, 00:04:40.963 "claimed": true, 00:04:40.963 "claim_type": "exclusive_write", 00:04:40.963 "zoned": false, 00:04:40.963 "supported_io_types": { 00:04:40.963 "read": true, 00:04:40.963 "write": true, 00:04:40.963 "unmap": true, 00:04:40.963 "flush": true, 00:04:40.963 "reset": true, 00:04:40.963 "nvme_admin": false, 00:04:40.963 "nvme_io": false, 00:04:40.963 "nvme_io_md": false, 00:04:40.963 "write_zeroes": true, 00:04:40.963 "zcopy": true, 00:04:40.963 "get_zone_info": false, 00:04:40.963 "zone_management": false, 00:04:40.963 "zone_append": false, 00:04:40.963 "compare": false, 00:04:40.963 "compare_and_write": false, 00:04:40.963 "abort": true, 00:04:40.963 "seek_hole": false, 00:04:40.963 "seek_data": false, 00:04:40.963 "copy": true, 00:04:40.963 "nvme_iov_md": false 00:04:40.963 }, 00:04:40.963 "memory_domains": [ 00:04:40.963 { 00:04:40.963 "dma_device_id": "system", 00:04:40.963 "dma_device_type": 1 00:04:40.963 }, 00:04:40.963 { 00:04:40.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.963 "dma_device_type": 2 00:04:40.963 } 00:04:40.963 ], 00:04:40.963 "driver_specific": {} 00:04:40.963 }, 00:04:40.963 { 00:04:40.963 "name": "Passthru0", 00:04:40.963 "aliases": [ 00:04:40.963 "65ac431f-8027-5196-8cf9-1af9722d13fc" 00:04:40.963 ], 00:04:40.963 "product_name": "passthru", 00:04:40.963 "block_size": 512, 00:04:40.963 "num_blocks": 16384, 00:04:40.963 "uuid": "65ac431f-8027-5196-8cf9-1af9722d13fc", 00:04:40.963 "assigned_rate_limits": { 00:04:40.963 "rw_ios_per_sec": 0, 00:04:40.963 "rw_mbytes_per_sec": 0, 00:04:40.963 "r_mbytes_per_sec": 0, 00:04:40.963 "w_mbytes_per_sec": 0 00:04:40.963 }, 00:04:40.963 "claimed": false, 00:04:40.963 "zoned": false, 00:04:40.963 "supported_io_types": { 00:04:40.963 "read": true, 00:04:40.963 "write": true, 00:04:40.963 "unmap": true, 00:04:40.963 "flush": true, 00:04:40.963 "reset": true, 00:04:40.963 "nvme_admin": false, 00:04:40.963 "nvme_io": false, 00:04:40.963 "nvme_io_md": false, 00:04:40.963 "write_zeroes": true, 00:04:40.963 "zcopy": true, 00:04:40.963 "get_zone_info": false, 00:04:40.963 "zone_management": false, 00:04:40.963 "zone_append": false, 00:04:40.963 "compare": false, 00:04:40.963 "compare_and_write": false, 00:04:40.963 "abort": true, 00:04:40.963 "seek_hole": false, 00:04:40.963 "seek_data": false, 00:04:40.963 "copy": true, 00:04:40.963 "nvme_iov_md": false 00:04:40.963 
}, 00:04:40.963 "memory_domains": [ 00:04:40.963 { 00:04:40.963 "dma_device_id": "system", 00:04:40.963 "dma_device_type": 1 00:04:40.963 }, 00:04:40.963 { 00:04:40.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.963 "dma_device_type": 2 00:04:40.963 } 00:04:40.963 ], 00:04:40.963 "driver_specific": { 00:04:40.963 "passthru": { 00:04:40.963 "name": "Passthru0", 00:04:40.963 "base_bdev_name": "Malloc0" 00:04:40.963 } 00:04:40.963 } 00:04:40.963 } 00:04:40.963 ]' 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.963 19:40:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.963 00:04:40.963 real 0m0.278s 00:04:40.963 user 0m0.171s 00:04:40.963 sys 0m0.037s 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.963 19:40:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.964 ************************************ 00:04:40.964 END TEST rpc_integrity 00:04:40.964 ************************************ 00:04:40.964 19:40:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.964 19:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.964 19:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.964 19:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.964 ************************************ 00:04:40.964 START TEST rpc_plugins 00:04:40.964 ************************************ 00:04:40.964 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:40.964 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.964 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.964 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.964 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.964 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.964 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.964 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.964 19:40:32 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:41.223 { 00:04:41.223 "name": "Malloc1", 00:04:41.223 "aliases": [ 00:04:41.223 "ad06be26-916e-4ced-bdad-670258f36eea" 00:04:41.223 ], 00:04:41.223 "product_name": "Malloc disk", 00:04:41.223 "block_size": 4096, 00:04:41.223 "num_blocks": 256, 00:04:41.223 "uuid": "ad06be26-916e-4ced-bdad-670258f36eea", 00:04:41.223 "assigned_rate_limits": { 00:04:41.223 "rw_ios_per_sec": 0, 00:04:41.223 "rw_mbytes_per_sec": 0, 00:04:41.223 "r_mbytes_per_sec": 0, 00:04:41.223 "w_mbytes_per_sec": 0 00:04:41.223 }, 00:04:41.223 "claimed": false, 00:04:41.223 "zoned": false, 00:04:41.223 "supported_io_types": { 00:04:41.223 "read": true, 00:04:41.223 "write": true, 00:04:41.223 "unmap": true, 00:04:41.223 "flush": true, 00:04:41.223 "reset": true, 00:04:41.223 "nvme_admin": false, 00:04:41.223 "nvme_io": false, 00:04:41.223 "nvme_io_md": false, 00:04:41.223 "write_zeroes": true, 00:04:41.223 "zcopy": true, 00:04:41.223 "get_zone_info": false, 00:04:41.223 "zone_management": false, 00:04:41.223 "zone_append": false, 00:04:41.223 "compare": false, 00:04:41.223 "compare_and_write": false, 00:04:41.223 "abort": true, 00:04:41.223 "seek_hole": false, 00:04:41.223 "seek_data": false, 00:04:41.223 "copy": true, 00:04:41.223 "nvme_iov_md": false 00:04:41.223 }, 00:04:41.223 "memory_domains": [ 00:04:41.223 { 00:04:41.223 "dma_device_id": "system", 00:04:41.223 "dma_device_type": 1 00:04:41.223 }, 00:04:41.223 { 00:04:41.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.223 "dma_device_type": 2 00:04:41.223 } 00:04:41.223 ], 00:04:41.223 "driver_specific": {} 00:04:41.223 } 00:04:41.223 ]' 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:41.223 19:40:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:41.223 00:04:41.223 real 0m0.135s 00:04:41.223 user 0m0.090s 00:04:41.223 sys 0m0.012s 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.223 19:40:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.223 ************************************ 00:04:41.223 END TEST rpc_plugins 00:04:41.223 ************************************ 00:04:41.223 19:40:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:41.223 19:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.223 19:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.223 19:40:32 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.223 ************************************ 00:04:41.223 START TEST rpc_trace_cmd_test 00:04:41.223 ************************************ 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.223 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1869660", 00:04:41.223 "tpoint_group_mask": "0x8", 00:04:41.223 "iscsi_conn": { 00:04:41.223 "mask": "0x2", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "scsi": { 00:04:41.223 "mask": "0x4", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "bdev": { 00:04:41.223 "mask": "0x8", 00:04:41.223 "tpoint_mask": "0xffffffffffffffff" 00:04:41.223 }, 00:04:41.223 "nvmf_rdma": { 00:04:41.223 "mask": "0x10", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "nvmf_tcp": { 00:04:41.223 "mask": "0x20", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "ftl": { 00:04:41.223 "mask": "0x40", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "blobfs": { 00:04:41.223 "mask": "0x80", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "dsa": { 00:04:41.223 "mask": "0x200", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "thread": { 00:04:41.223 "mask": "0x400", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "nvme_pcie": { 00:04:41.223 "mask": "0x800", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "iaa": { 00:04:41.223 "mask": "0x1000", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "nvme_tcp": { 00:04:41.223 "mask": "0x2000", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "bdev_nvme": { 00:04:41.223 "mask": "0x4000", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 }, 00:04:41.223 "sock": { 00:04:41.223 "mask": "0x8000", 00:04:41.223 "tpoint_mask": "0x0" 00:04:41.223 } 00:04:41.223 }' 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:41.223 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.482 00:04:41.482 real 0m0.212s 00:04:41.482 user 0m0.184s 00:04:41.482 sys 0m0.020s 00:04:41.482 19:40:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.482 19:40:32 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.482 ************************************ 00:04:41.482 END TEST rpc_trace_cmd_test 00:04:41.482 ************************************ 00:04:41.482 19:40:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.482 19:40:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.482 19:40:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.482 19:40:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.482 19:40:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.482 19:40:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.482 ************************************ 00:04:41.482 START TEST rpc_daemon_integrity 00:04:41.482 ************************************ 00:04:41.482 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.483 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.741 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.741 { 00:04:41.741 "name": "Malloc2", 00:04:41.741 "aliases": [ 00:04:41.741 "8aad2df5-9b1c-45f0-aaee-d1756e83d9cf" 00:04:41.741 ], 00:04:41.741 "product_name": "Malloc disk", 00:04:41.741 "block_size": 512, 00:04:41.741 "num_blocks": 16384, 00:04:41.741 "uuid": "8aad2df5-9b1c-45f0-aaee-d1756e83d9cf", 00:04:41.741 "assigned_rate_limits": { 00:04:41.741 "rw_ios_per_sec": 0, 00:04:41.741 "rw_mbytes_per_sec": 0, 00:04:41.741 "r_mbytes_per_sec": 0, 00:04:41.741 "w_mbytes_per_sec": 0 00:04:41.741 }, 00:04:41.741 "claimed": false, 00:04:41.741 "zoned": false, 00:04:41.741 "supported_io_types": { 00:04:41.741 "read": true, 00:04:41.741 "write": true, 00:04:41.741 "unmap": true, 00:04:41.741 "flush": true, 00:04:41.741 "reset": true, 00:04:41.741 "nvme_admin": false, 00:04:41.741 "nvme_io": false, 00:04:41.741 "nvme_io_md": false, 00:04:41.741 "write_zeroes": true, 00:04:41.741 "zcopy": true, 00:04:41.741 "get_zone_info": false, 00:04:41.741 "zone_management": false, 00:04:41.741 "zone_append": false, 00:04:41.741 "compare": false, 00:04:41.741 "compare_and_write": false, 
00:04:41.741 "abort": true, 00:04:41.741 "seek_hole": false, 00:04:41.741 "seek_data": false, 00:04:41.741 "copy": true, 00:04:41.741 "nvme_iov_md": false 00:04:41.741 }, 00:04:41.741 "memory_domains": [ 00:04:41.741 { 00:04:41.741 "dma_device_id": "system", 00:04:41.742 "dma_device_type": 1 00:04:41.742 }, 00:04:41.742 { 00:04:41.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.742 "dma_device_type": 2 00:04:41.742 } 00:04:41.742 ], 00:04:41.742 "driver_specific": {} 00:04:41.742 } 00:04:41.742 ]' 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 [2024-07-24 19:40:33.145461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.742 [2024-07-24 19:40:33.145488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.742 [2024-07-24 19:40:33.145500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af2360 00:04:41.742 [2024-07-24 19:40:33.145507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.742 [2024-07-24 19:40:33.146456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.742 [2024-07-24 19:40:33.146475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.742 Passthru0 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.742 { 00:04:41.742 "name": "Malloc2", 00:04:41.742 "aliases": [ 00:04:41.742 "8aad2df5-9b1c-45f0-aaee-d1756e83d9cf" 00:04:41.742 ], 00:04:41.742 "product_name": "Malloc disk", 00:04:41.742 "block_size": 512, 00:04:41.742 "num_blocks": 16384, 00:04:41.742 "uuid": "8aad2df5-9b1c-45f0-aaee-d1756e83d9cf", 00:04:41.742 "assigned_rate_limits": { 00:04:41.742 "rw_ios_per_sec": 0, 00:04:41.742 "rw_mbytes_per_sec": 0, 00:04:41.742 "r_mbytes_per_sec": 0, 00:04:41.742 "w_mbytes_per_sec": 0 00:04:41.742 }, 00:04:41.742 "claimed": true, 00:04:41.742 "claim_type": "exclusive_write", 00:04:41.742 "zoned": false, 00:04:41.742 "supported_io_types": { 00:04:41.742 "read": true, 00:04:41.742 "write": true, 00:04:41.742 "unmap": true, 00:04:41.742 "flush": true, 00:04:41.742 "reset": true, 00:04:41.742 "nvme_admin": false, 00:04:41.742 "nvme_io": false, 00:04:41.742 "nvme_io_md": false, 00:04:41.742 "write_zeroes": true, 00:04:41.742 "zcopy": true, 00:04:41.742 "get_zone_info": false, 00:04:41.742 "zone_management": false, 00:04:41.742 "zone_append": false, 00:04:41.742 "compare": false, 00:04:41.742 "compare_and_write": false, 00:04:41.742 "abort": true, 00:04:41.742 "seek_hole": false, 00:04:41.742 "seek_data": false, 00:04:41.742 "copy": true, 
00:04:41.742 "nvme_iov_md": false 00:04:41.742 }, 00:04:41.742 "memory_domains": [ 00:04:41.742 { 00:04:41.742 "dma_device_id": "system", 00:04:41.742 "dma_device_type": 1 00:04:41.742 }, 00:04:41.742 { 00:04:41.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.742 "dma_device_type": 2 00:04:41.742 } 00:04:41.742 ], 00:04:41.742 "driver_specific": {} 00:04:41.742 }, 00:04:41.742 { 00:04:41.742 "name": "Passthru0", 00:04:41.742 "aliases": [ 00:04:41.742 "4311cce4-c27d-5992-8fdf-f3d365f2ac7c" 00:04:41.742 ], 00:04:41.742 "product_name": "passthru", 00:04:41.742 "block_size": 512, 00:04:41.742 "num_blocks": 16384, 00:04:41.742 "uuid": "4311cce4-c27d-5992-8fdf-f3d365f2ac7c", 00:04:41.742 "assigned_rate_limits": { 00:04:41.742 "rw_ios_per_sec": 0, 00:04:41.742 "rw_mbytes_per_sec": 0, 00:04:41.742 "r_mbytes_per_sec": 0, 00:04:41.742 "w_mbytes_per_sec": 0 00:04:41.742 }, 00:04:41.742 "claimed": false, 00:04:41.742 "zoned": false, 00:04:41.742 "supported_io_types": { 00:04:41.742 "read": true, 00:04:41.742 "write": true, 00:04:41.742 "unmap": true, 00:04:41.742 "flush": true, 00:04:41.742 "reset": true, 00:04:41.742 "nvme_admin": false, 00:04:41.742 "nvme_io": false, 00:04:41.742 "nvme_io_md": false, 00:04:41.742 "write_zeroes": true, 00:04:41.742 "zcopy": true, 00:04:41.742 "get_zone_info": false, 00:04:41.742 "zone_management": false, 00:04:41.742 "zone_append": false, 00:04:41.742 "compare": false, 00:04:41.742 "compare_and_write": false, 00:04:41.742 "abort": true, 00:04:41.742 "seek_hole": false, 00:04:41.742 "seek_data": false, 00:04:41.742 "copy": true, 00:04:41.742 "nvme_iov_md": false 00:04:41.742 }, 00:04:41.742 "memory_domains": [ 00:04:41.742 { 00:04:41.742 "dma_device_id": "system", 00:04:41.742 "dma_device_type": 1 00:04:41.742 }, 00:04:41.742 { 00:04:41.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.742 "dma_device_type": 2 00:04:41.742 } 00:04:41.742 ], 00:04:41.742 "driver_specific": { 00:04:41.742 "passthru": { 00:04:41.742 "name": "Passthru0", 00:04:41.742 "base_bdev_name": "Malloc2" 00:04:41.742 } 00:04:41.742 } 00:04:41.742 } 00:04:41.742 ]' 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.742 19:40:33 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.742 00:04:41.742 real 0m0.256s 00:04:41.742 user 0m0.163s 00:04:41.742 sys 0m0.032s 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.742 19:40:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.742 ************************************ 00:04:41.742 END TEST rpc_daemon_integrity 00:04:41.742 ************************************ 00:04:41.742 19:40:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.742 19:40:33 rpc -- rpc/rpc.sh@84 -- # killprocess 1869660 00:04:41.742 19:40:33 rpc -- common/autotest_common.sh@950 -- # '[' -z 1869660 ']' 00:04:41.742 19:40:33 rpc -- common/autotest_common.sh@954 -- # kill -0 1869660 00:04:41.742 19:40:33 rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.742 19:40:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.742 19:40:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869660 00:04:42.001 19:40:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.001 19:40:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.001 19:40:33 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869660' 00:04:42.001 killing process with pid 1869660 00:04:42.001 19:40:33 rpc -- common/autotest_common.sh@969 -- # kill 1869660 00:04:42.001 19:40:33 rpc -- common/autotest_common.sh@974 -- # wait 1869660 00:04:42.259 00:04:42.259 real 0m2.428s 00:04:42.259 user 0m3.124s 00:04:42.259 sys 0m0.657s 00:04:42.259 19:40:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.259 19:40:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.259 ************************************ 00:04:42.259 END TEST rpc 00:04:42.259 ************************************ 00:04:42.259 19:40:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.259 19:40:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.259 19:40:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.259 19:40:33 -- common/autotest_common.sh@10 -- # set +x 00:04:42.259 ************************************ 00:04:42.259 START TEST skip_rpc 00:04:42.259 ************************************ 00:04:42.259 19:40:33 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.259 * Looking for test storage... 
00:04:42.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.259 19:40:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.259 19:40:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.259 19:40:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.259 19:40:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.259 19:40:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.259 19:40:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.259 ************************************ 00:04:42.259 START TEST skip_rpc 00:04:42.259 ************************************ 00:04:42.259 19:40:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:42.259 19:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1870289 00:04:42.259 19:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.259 19:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.259 19:40:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.517 [2024-07-24 19:40:33.888943] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:04:42.517 [2024-07-24 19:40:33.888982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870289 ] 00:04:42.517 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.517 [2024-07-24 19:40:33.940547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.517 [2024-07-24 19:40:34.015634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1870289 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1870289 ']' 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1870289 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1870289 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1870289' 00:04:47.786 killing process with pid 1870289 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1870289 00:04:47.786 19:40:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1870289 00:04:47.786 00:04:47.786 real 0m5.357s 00:04:47.786 user 0m5.133s 00:04:47.786 sys 0m0.248s 00:04:47.786 19:40:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.786 19:40:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.787 ************************************ 00:04:47.787 END TEST skip_rpc 00:04:47.787 ************************************ 00:04:47.787 19:40:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.787 19:40:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.787 19:40:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.787 19:40:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.787 ************************************ 00:04:47.787 START TEST skip_rpc_with_json 00:04:47.787 ************************************ 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1871241 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1871241 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1871241 ']' 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
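skip_rpc, which just passed, asserted the negative path: with --no-rpc-server the target never binds /var/tmp/spdk.sock, so the rpc_cmd spdk_get_version attempt above had to fail, and the NOT wrapper inverted that failure into a pass. skip_rpc_with_json, starting here, checks the round trip instead: configure a live target over RPC, snapshot it with save_config, then prove the snapshot replays with RPC disabled. A hedged reduction of that flow, where the sleep is a crude stand-in for the test's own synchronization:

  ./scripts/rpc.py nvmf_create_transport -t tcp          # against a target started with spdk_tgt -m 0x1
  ./scripts/rpc.py save_config > config.json
  # relaunch from the snapshot with no RPC server, then check the transport came back
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 2
  grep -q 'TCP Transport Init' log.txt && echo "config replayed: transport re-created without RPC"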
00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.787 19:40:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.787 [2024-07-24 19:40:39.310113] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:04:47.787 [2024-07-24 19:40:39.310155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871241 ] 00:04:47.787 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.787 [2024-07-24 19:40:39.362521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.046 [2024-07-24 19:40:39.445054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.613 [2024-07-24 19:40:40.111813] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.613 request: 00:04:48.613 { 00:04:48.613 "trtype": "tcp", 00:04:48.613 "method": "nvmf_get_transports", 00:04:48.613 "req_id": 1 00:04:48.613 } 00:04:48.613 Got JSON-RPC error response 00:04:48.613 response: 00:04:48.613 { 00:04:48.613 "code": -19, 00:04:48.613 "message": "No such device" 00:04:48.613 } 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.613 [2024-07-24 19:40:40.119915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.613 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.874 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.874 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.874 { 00:04:48.874 "subsystems": [ 00:04:48.874 { 00:04:48.874 "subsystem": "vfio_user_target", 00:04:48.874 "config": null 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "keyring", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "iobuf", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "iobuf_set_options", 00:04:48.874 
"params": { 00:04:48.874 "small_pool_count": 8192, 00:04:48.874 "large_pool_count": 1024, 00:04:48.874 "small_bufsize": 8192, 00:04:48.874 "large_bufsize": 135168 00:04:48.874 } 00:04:48.874 } 00:04:48.874 ] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "sock", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "sock_set_default_impl", 00:04:48.874 "params": { 00:04:48.874 "impl_name": "posix" 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "sock_impl_set_options", 00:04:48.874 "params": { 00:04:48.874 "impl_name": "ssl", 00:04:48.874 "recv_buf_size": 4096, 00:04:48.874 "send_buf_size": 4096, 00:04:48.874 "enable_recv_pipe": true, 00:04:48.874 "enable_quickack": false, 00:04:48.874 "enable_placement_id": 0, 00:04:48.874 "enable_zerocopy_send_server": true, 00:04:48.874 "enable_zerocopy_send_client": false, 00:04:48.874 "zerocopy_threshold": 0, 00:04:48.874 "tls_version": 0, 00:04:48.874 "enable_ktls": false 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "sock_impl_set_options", 00:04:48.874 "params": { 00:04:48.874 "impl_name": "posix", 00:04:48.874 "recv_buf_size": 2097152, 00:04:48.874 "send_buf_size": 2097152, 00:04:48.874 "enable_recv_pipe": true, 00:04:48.874 "enable_quickack": false, 00:04:48.874 "enable_placement_id": 0, 00:04:48.874 "enable_zerocopy_send_server": true, 00:04:48.874 "enable_zerocopy_send_client": false, 00:04:48.874 "zerocopy_threshold": 0, 00:04:48.874 "tls_version": 0, 00:04:48.874 "enable_ktls": false 00:04:48.874 } 00:04:48.874 } 00:04:48.874 ] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "vmd", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "accel", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "accel_set_options", 00:04:48.874 "params": { 00:04:48.874 "small_cache_size": 128, 00:04:48.874 "large_cache_size": 16, 00:04:48.874 "task_count": 2048, 00:04:48.874 "sequence_count": 2048, 00:04:48.874 "buf_count": 2048 00:04:48.874 } 00:04:48.874 } 00:04:48.874 ] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "bdev", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "bdev_set_options", 00:04:48.874 "params": { 00:04:48.874 "bdev_io_pool_size": 65535, 00:04:48.874 "bdev_io_cache_size": 256, 00:04:48.874 "bdev_auto_examine": true, 00:04:48.874 "iobuf_small_cache_size": 128, 00:04:48.874 "iobuf_large_cache_size": 16 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "bdev_raid_set_options", 00:04:48.874 "params": { 00:04:48.874 "process_window_size_kb": 1024, 00:04:48.874 "process_max_bandwidth_mb_sec": 0 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "bdev_iscsi_set_options", 00:04:48.874 "params": { 00:04:48.874 "timeout_sec": 30 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "bdev_nvme_set_options", 00:04:48.874 "params": { 00:04:48.874 "action_on_timeout": "none", 00:04:48.874 "timeout_us": 0, 00:04:48.874 "timeout_admin_us": 0, 00:04:48.874 "keep_alive_timeout_ms": 10000, 00:04:48.874 "arbitration_burst": 0, 00:04:48.874 "low_priority_weight": 0, 00:04:48.874 "medium_priority_weight": 0, 00:04:48.874 "high_priority_weight": 0, 00:04:48.874 "nvme_adminq_poll_period_us": 10000, 00:04:48.874 "nvme_ioq_poll_period_us": 0, 00:04:48.874 "io_queue_requests": 0, 00:04:48.874 "delay_cmd_submit": true, 00:04:48.874 "transport_retry_count": 4, 00:04:48.874 "bdev_retry_count": 3, 00:04:48.874 "transport_ack_timeout": 0, 00:04:48.874 "ctrlr_loss_timeout_sec": 0, 
00:04:48.874 "reconnect_delay_sec": 0, 00:04:48.874 "fast_io_fail_timeout_sec": 0, 00:04:48.874 "disable_auto_failback": false, 00:04:48.874 "generate_uuids": false, 00:04:48.874 "transport_tos": 0, 00:04:48.874 "nvme_error_stat": false, 00:04:48.874 "rdma_srq_size": 0, 00:04:48.874 "io_path_stat": false, 00:04:48.874 "allow_accel_sequence": false, 00:04:48.874 "rdma_max_cq_size": 0, 00:04:48.874 "rdma_cm_event_timeout_ms": 0, 00:04:48.874 "dhchap_digests": [ 00:04:48.874 "sha256", 00:04:48.874 "sha384", 00:04:48.874 "sha512" 00:04:48.874 ], 00:04:48.874 "dhchap_dhgroups": [ 00:04:48.874 "null", 00:04:48.874 "ffdhe2048", 00:04:48.874 "ffdhe3072", 00:04:48.874 "ffdhe4096", 00:04:48.874 "ffdhe6144", 00:04:48.874 "ffdhe8192" 00:04:48.874 ] 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "bdev_nvme_set_hotplug", 00:04:48.874 "params": { 00:04:48.874 "period_us": 100000, 00:04:48.874 "enable": false 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "bdev_wait_for_examine" 00:04:48.874 } 00:04:48.874 ] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "scsi", 00:04:48.874 "config": null 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "scheduler", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "framework_set_scheduler", 00:04:48.874 "params": { 00:04:48.874 "name": "static" 00:04:48.874 } 00:04:48.874 } 00:04:48.874 ] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "vhost_scsi", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "vhost_blk", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "ublk", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "nbd", 00:04:48.874 "config": [] 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "subsystem": "nvmf", 00:04:48.874 "config": [ 00:04:48.874 { 00:04:48.874 "method": "nvmf_set_config", 00:04:48.874 "params": { 00:04:48.874 "discovery_filter": "match_any", 00:04:48.874 "admin_cmd_passthru": { 00:04:48.874 "identify_ctrlr": false 00:04:48.874 } 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "nvmf_set_max_subsystems", 00:04:48.874 "params": { 00:04:48.874 "max_subsystems": 1024 00:04:48.874 } 00:04:48.874 }, 00:04:48.874 { 00:04:48.874 "method": "nvmf_set_crdt", 00:04:48.874 "params": { 00:04:48.874 "crdt1": 0, 00:04:48.874 "crdt2": 0, 00:04:48.874 "crdt3": 0 00:04:48.874 } 00:04:48.874 }, 00:04:48.875 { 00:04:48.875 "method": "nvmf_create_transport", 00:04:48.875 "params": { 00:04:48.875 "trtype": "TCP", 00:04:48.875 "max_queue_depth": 128, 00:04:48.875 "max_io_qpairs_per_ctrlr": 127, 00:04:48.875 "in_capsule_data_size": 4096, 00:04:48.875 "max_io_size": 131072, 00:04:48.875 "io_unit_size": 131072, 00:04:48.875 "max_aq_depth": 128, 00:04:48.875 "num_shared_buffers": 511, 00:04:48.875 "buf_cache_size": 4294967295, 00:04:48.875 "dif_insert_or_strip": false, 00:04:48.875 "zcopy": false, 00:04:48.875 "c2h_success": true, 00:04:48.875 "sock_priority": 0, 00:04:48.875 "abort_timeout_sec": 1, 00:04:48.875 "ack_timeout": 0, 00:04:48.875 "data_wr_pool_size": 0 00:04:48.875 } 00:04:48.875 } 00:04:48.875 ] 00:04:48.875 }, 00:04:48.875 { 00:04:48.875 "subsystem": "iscsi", 00:04:48.875 "config": [ 00:04:48.875 { 00:04:48.875 "method": "iscsi_set_options", 00:04:48.875 "params": { 00:04:48.875 "node_base": "iqn.2016-06.io.spdk", 00:04:48.875 "max_sessions": 128, 00:04:48.875 "max_connections_per_session": 2, 00:04:48.875 "max_queue_depth": 64, 00:04:48.875 "default_time2wait": 
2, 00:04:48.875 "default_time2retain": 20, 00:04:48.875 "first_burst_length": 8192, 00:04:48.875 "immediate_data": true, 00:04:48.875 "allow_duplicated_isid": false, 00:04:48.875 "error_recovery_level": 0, 00:04:48.875 "nop_timeout": 60, 00:04:48.875 "nop_in_interval": 30, 00:04:48.875 "disable_chap": false, 00:04:48.875 "require_chap": false, 00:04:48.875 "mutual_chap": false, 00:04:48.875 "chap_group": 0, 00:04:48.875 "max_large_datain_per_connection": 64, 00:04:48.875 "max_r2t_per_connection": 4, 00:04:48.875 "pdu_pool_size": 36864, 00:04:48.875 "immediate_data_pool_size": 16384, 00:04:48.875 "data_out_pool_size": 2048 00:04:48.875 } 00:04:48.875 } 00:04:48.875 ] 00:04:48.875 } 00:04:48.875 ] 00:04:48.875 } 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1871241 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1871241 ']' 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1871241 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871241 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871241' 00:04:48.875 killing process with pid 1871241 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1871241 00:04:48.875 19:40:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1871241 00:04:49.135 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1871477 00:04:49.135 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.135 19:40:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1871477 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1871477 ']' 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1871477 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871477 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871477' 00:04:54.408 killing process with pid 1871477 00:04:54.408 19:40:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1871477 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1871477 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.408 00:04:54.408 real 0m6.714s 00:04:54.408 user 0m6.559s 00:04:54.408 sys 0m0.549s 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.408 19:40:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.408 ************************************ 00:04:54.408 END TEST skip_rpc_with_json 00:04:54.408 ************************************ 00:04:54.408 19:40:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.408 19:40:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.408 19:40:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.408 19:40:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 ************************************ 00:04:54.678 START TEST skip_rpc_with_delay 00:04:54.678 ************************************ 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.678 [2024-07-24 19:40:46.090432] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:04:54.678 [2024-07-24 19:40:46.090489] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.678 00:04:54.678 real 0m0.061s 00:04:54.678 user 0m0.038s 00:04:54.678 sys 0m0.021s 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.678 19:40:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 ************************************ 00:04:54.678 END TEST skip_rpc_with_delay 00:04:54.678 ************************************ 00:04:54.678 19:40:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.678 19:40:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.678 19:40:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.678 19:40:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.678 19:40:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.678 19:40:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 ************************************ 00:04:54.678 START TEST exit_on_failed_rpc_init 00:04:54.678 ************************************ 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1872450 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1872450 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1872450 ']' 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.678 19:40:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.678 [2024-07-24 19:40:46.217846] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
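The skip_rpc_with_delay case above passes only if spdk_tgt refuses to start, so the harness wraps the launch in a NOT helper that inverts the exit status, with statuses above 128 (death by signal) still counted as a real failure. A minimal sketch of that inversion pattern, reconstructed from the es bookkeeping in the trace; the real helper in autotest_common.sh also resolves the command path first, and the binary path is shortened here:

NOT() {
    # Succeeds only when the wrapped command fails cleanly (sketch, not the exact helper).
    local es=0
    "$@" || es=$?
    # An exit status above 128 means the command died from a signal: not a clean failure.
    if (( es > 128 )); then
        return 1
    fi
    (( es != 0 ))
}

# The test above passes because this flag combination must be rejected:
NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc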
00:04:54.678 [2024-07-24 19:40:46.217890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872450 ] 00:04:54.678 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.678 [2024-07-24 19:40:46.270439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.938 [2024-07-24 19:40:46.350459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.506 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.506 [2024-07-24 19:40:47.062387] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
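exit_on_failed_rpc_init brings the first target up through waitforlisten, which blocks until the Unix domain socket answers. A hedged sketch of that polling loop — the probe RPC and retry interval are assumptions, while max_retries=100 and the banner come straight from the trace:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        # Bail out if the target died during startup instead of spinning forever.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods is a cheap probe; it succeeds once the socket is live (assumed probe).
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}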
00:04:55.506 [2024-07-24 19:40:47.062434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872601 ] 00:04:55.506 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.835 [2024-07-24 19:40:47.114738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.835 [2024-07-24 19:40:47.187743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.835 [2024-07-24 19:40:47.187825] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:55.835 [2024-07-24 19:40:47.187835] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.835 [2024-07-24 19:40:47.187841] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1872450 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1872450 ']' 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1872450 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1872450 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1872450' 00:04:55.835 killing process with pid 1872450 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1872450 00:04:55.835 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1872450 00:04:56.095 00:04:56.095 real 0m1.440s 00:04:56.095 user 0m1.679s 00:04:56.095 sys 0m0.361s 00:04:56.095 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.095 19:40:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.095 ************************************ 00:04:56.095 END TEST exit_on_failed_rpc_init 00:04:56.095 ************************************ 00:04:56.095 19:40:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.095 00:04:56.095 real 0m13.916s 00:04:56.095 user 0m13.527s 00:04:56.095 sys 0m1.423s 00:04:56.095 19:40:47 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.095 19:40:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.095 ************************************ 00:04:56.095 END TEST skip_rpc 00:04:56.096 ************************************ 00:04:56.096 19:40:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.096 19:40:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.096 19:40:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.096 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.355 ************************************ 00:04:56.355 START TEST rpc_client 00:04:56.355 ************************************ 00:04:56.355 19:40:47 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.355 * Looking for test storage... 00:04:56.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:56.355 19:40:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:56.355 OK 00:04:56.355 19:40:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.355 00:04:56.355 real 0m0.111s 00:04:56.355 user 0m0.043s 00:04:56.355 sys 0m0.075s 00:04:56.355 19:40:47 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.355 19:40:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.355 ************************************ 00:04:56.355 END TEST rpc_client 00:04:56.355 ************************************ 00:04:56.355 19:40:47 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.355 19:40:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.355 19:40:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.355 19:40:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.355 ************************************ 00:04:56.355 START TEST json_config 00:04:56.355 ************************************ 00:04:56.355 19:40:47 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.355 19:40:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.355 19:40:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
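Both skip_rpc teardowns above go through killprocess, which probes the pid with kill -0 (no signal is actually sent), inspects the process name, and only then signals and reaps it. A reconstruction of that pattern; the sudo branch body is assumed, since these traces only ever take the reactor_0 path:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                         # existence probe only
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")    # Linux path, as in the trace
    if [ "$process_name" = sudo ]; then
        kill -9 "$pid"                                 # assumed: don't SIGTERM a sudo wrapper
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}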
00:04:56.355 19:40:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.616 19:40:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.616 19:40:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.616 19:40:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.616 19:40:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.616 19:40:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.616 19:40:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.616 19:40:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.616 19:40:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@47 -- # : 0 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.616 19:40:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:56.616 INFO: JSON configuration test init 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.616 19:40:47 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.616 19:40:47 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.616 19:40:47 json_config -- json_config/common.sh@10 -- # shift 00:04:56.616 19:40:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.616 19:40:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.616 19:40:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.616 19:40:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:56.616 19:40:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.616 19:40:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1872805 00:04:56.616 19:40:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.616 Waiting for target to run... 00:04:56.616 19:40:47 json_config -- json_config/common.sh@25 -- # waitforlisten 1872805 /var/tmp/spdk_tgt.sock 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 1872805 ']' 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.616 19:40:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.616 19:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.616 [2024-07-24 19:40:48.032635] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:04:56.616 [2024-07-24 19:40:48.032685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872805 ] 00:04:56.616 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.875 [2024-07-24 19:40:48.471569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.133 [2024-07-24 19:40:48.561136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:57.392 19:40:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.392 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.392 19:40:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.392 19:40:48 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:57.392 19:40:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:00.685 19:40:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.685 19:40:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:00.685 19:40:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:00.685 19:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@51 -- # sort 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:00.685 19:40:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.685 19:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:00.685 19:40:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.685 19:40:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:00.685 19:40:52 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.685 19:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:00.952 MallocForNvmf0 00:05:00.952 
19:40:52 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.952 19:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.952 MallocForNvmf1 00:05:00.952 19:40:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.952 19:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.212 [2024-07-24 19:40:52.676614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.212 19:40:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.212 19:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.470 19:40:52 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.470 19:40:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.470 19:40:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.470 19:40:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.730 19:40:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.730 19:40:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.989 [2024-07-24 19:40:53.358734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.989 19:40:53 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:01.989 19:40:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.989 19:40:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.989 19:40:53 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:01.989 19:40:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.989 19:40:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.989 19:40:53 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:01.989 19:40:53 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.989 19:40:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.247 MallocBdevForConfigChangeCheck 00:05:02.247 19:40:53 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:02.247 19:40:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.247 19:40:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.247 19:40:53 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:02.248 19:40:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.506 19:40:53 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:02.506 INFO: shutting down applications... 00:05:02.506 19:40:53 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:02.506 19:40:53 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:02.506 19:40:53 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:02.506 19:40:53 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.413 Calling clear_iscsi_subsystem 00:05:04.413 Calling clear_nvmf_subsystem 00:05:04.413 Calling clear_nbd_subsystem 00:05:04.413 Calling clear_ublk_subsystem 00:05:04.413 Calling clear_vhost_blk_subsystem 00:05:04.413 Calling clear_vhost_scsi_subsystem 00:05:04.413 Calling clear_bdev_subsystem 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@349 -- # break 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:04.413 19:40:55 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:04.413 19:40:55 json_config -- json_config/common.sh@31 -- # local app=target 00:05:04.413 19:40:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.413 19:40:55 json_config -- json_config/common.sh@35 -- # [[ -n 1872805 ]] 00:05:04.413 19:40:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1872805 00:05:04.413 19:40:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.413 19:40:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.414 19:40:55 json_config -- json_config/common.sh@41 -- # kill -0 1872805 00:05:04.414 19:40:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.983 19:40:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.983 19:40:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.983 19:40:56 json_config -- json_config/common.sh@41 -- # kill -0 1872805 00:05:04.983 19:40:56 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.983 19:40:56 json_config -- json_config/common.sh@43 -- # break 00:05:04.983 19:40:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.983 19:40:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.983 SPDK target shutdown done 00:05:04.983 19:40:56 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:04.983 INFO: relaunching applications... 00:05:04.983 19:40:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.983 19:40:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.983 19:40:56 json_config -- json_config/common.sh@10 -- # shift 00:05:04.983 19:40:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.983 19:40:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.983 19:40:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.983 19:40:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.983 19:40:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.983 19:40:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1874329 00:05:04.983 19:40:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.983 Waiting for target to run... 00:05:04.983 19:40:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.983 19:40:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1874329 /var/tmp/spdk_tgt.sock 00:05:04.983 19:40:56 json_config -- common/autotest_common.sh@831 -- # '[' -z 1874329 ']' 00:05:04.983 19:40:56 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.983 19:40:56 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.983 19:40:56 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.984 19:40:56 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.984 19:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.984 [2024-07-24 19:40:56.421960] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
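json_config_test_shutdown_app, traced twice in this run, sends SIGINT and then polls kill -0 up to 30 times at half-second intervals before declaring the target down. The loop as it appears in the json_config/common.sh trace; the escalation path after the 15 s budget is an assumption, since these runs always exit the loop early:

shutdown_app() {
    local app=$1 i
    local pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 fails once the process is gone.
        if ! kill -0 "$pid" 2>/dev/null; then
            app_pid[$app]=
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    kill -9 "$pid"    # assumed last resort once the polling budget is spent
}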
00:05:04.984 [2024-07-24 19:40:56.422014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874329 ] 00:05:04.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.243 [2024-07-24 19:40:56.698706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.243 [2024-07-24 19:40:56.764970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.526 [2024-07-24 19:40:59.774342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.526 [2024-07-24 19:40:59.806656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.526 19:40:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.526 19:40:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:08.526 19:40:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.526 00:05:08.526 19:40:59 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:08.526 19:40:59 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:08.526 INFO: Checking if target configuration is the same... 00:05:08.526 19:40:59 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.526 19:40:59 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:08.526 19:40:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.526 + '[' 2 -ne 2 ']' 00:05:08.526 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:08.526 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:08.526 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.526 +++ basename /dev/fd/62 00:05:08.526 ++ mktemp /tmp/62.XXX 00:05:08.526 + tmp_file_1=/tmp/62.rm5 00:05:08.526 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.526 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.526 + tmp_file_2=/tmp/spdk_tgt_config.json.eay 00:05:08.526 + ret=0 00:05:08.526 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.784 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.784 + diff -u /tmp/62.rm5 /tmp/spdk_tgt_config.json.eay 00:05:08.784 + echo 'INFO: JSON config files are the same' 00:05:08.784 INFO: JSON config files are the same 00:05:08.784 + rm /tmp/62.rm5 /tmp/spdk_tgt_config.json.eay 00:05:08.784 + exit 0 00:05:08.784 19:41:00 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:08.784 19:41:00 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:08.784 INFO: changing configuration and checking if this can be detected... 
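The "configuration is the same" check above works by dumping the live config over save_config, normalizing both JSON files with config_filter.py -method sort, and running diff -u on the results. The same idea expressed with jq -S standing in for the sort filter — an assumption, since the harness uses its own Python filter:

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
# Normalize key order so only real content differences survive the diff.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > "$tmp_file_1"
jq -S . spdk_tgt_config.json > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
fi
rm "$tmp_file_1" "$tmp_file_2"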
00:05:08.784 19:41:00 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:08.784 19:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:08.784 19:41:00 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.784 19:41:00 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:08.784 19:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.043 + '[' 2 -ne 2 ']' 00:05:09.043 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.043 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.043 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.043 +++ basename /dev/fd/62 00:05:09.043 ++ mktemp /tmp/62.XXX 00:05:09.043 + tmp_file_1=/tmp/62.PfC 00:05:09.043 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.043 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.043 + tmp_file_2=/tmp/spdk_tgt_config.json.tk8 00:05:09.043 + ret=0 00:05:09.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.302 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.302 + diff -u /tmp/62.PfC /tmp/spdk_tgt_config.json.tk8 00:05:09.302 + ret=1 00:05:09.302 + echo '=== Start of file: /tmp/62.PfC ===' 00:05:09.302 + cat /tmp/62.PfC 00:05:09.302 + echo '=== End of file: /tmp/62.PfC ===' 00:05:09.302 + echo '' 00:05:09.302 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tk8 ===' 00:05:09.302 + cat /tmp/spdk_tgt_config.json.tk8 00:05:09.302 + echo '=== End of file: /tmp/spdk_tgt_config.json.tk8 ===' 00:05:09.302 + echo '' 00:05:09.302 + rm /tmp/62.PfC /tmp/spdk_tgt_config.json.tk8 00:05:09.302 + exit 1 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:09.302 INFO: configuration change detected. 
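Change detection then flips the assertion: deleting the MallocBdevForConfigChangeCheck bdev that was created for exactly this purpose must make the next diff non-empty (the ret=1 above). A sketch of that negative check, again with jq standing in for the harness's sort filter:

# Perturb the config, then demand that the normalized dumps now differ.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
if diff -u <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S .) \
           <(jq -S . spdk_tgt_config.json) > /dev/null; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
fi
echo 'INFO: configuration change detected.'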
00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@321 -- # [[ -n 1874329 ]] 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.302 19:41:00 json_config -- json_config/json_config.sh@327 -- # killprocess 1874329 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@950 -- # '[' -z 1874329 ']' 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@954 -- # kill -0 1874329 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@955 -- # uname 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1874329 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1874329' 00:05:09.302 killing process with pid 1874329 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@969 -- # kill 1874329 00:05:09.302 19:41:00 json_config -- common/autotest_common.sh@974 -- # wait 1874329 00:05:11.208 19:41:02 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.208 19:41:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:11.208 19:41:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.208 19:41:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.208 19:41:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:11.208 19:41:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:11.208 INFO: Success 00:05:11.208 00:05:11.208 real 0m14.466s 
00:05:11.208 user 0m15.120s 00:05:11.208 sys 0m1.817s 00:05:11.208 19:41:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.208 19:41:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.208 ************************************ 00:05:11.208 END TEST json_config 00:05:11.208 ************************************ 00:05:11.208 19:41:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.208 19:41:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.208 19:41:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.208 19:41:02 -- common/autotest_common.sh@10 -- # set +x 00:05:11.208 ************************************ 00:05:11.208 START TEST json_config_extra_key 00:05:11.208 ************************************ 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.208 19:41:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.208 19:41:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.208 19:41:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.208 19:41:02 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.208 19:41:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.208 19:41:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.208 19:41:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.208 19:41:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.208 19:41:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.208 19:41:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:11.208 INFO: launching applications... 00:05:11.208 19:41:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1875596 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.208 Waiting for target to run... 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1875596 /var/tmp/spdk_tgt.sock 00:05:11.208 19:41:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1875596 ']' 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.208 19:41:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.208 [2024-07-24 19:41:02.557248] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
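json_config/common.sh drives every app through the associative arrays declared above, so the same start and stop helpers cover the target here and, in other suites, an initiator as well. A reduced sketch of that bookkeeping for the single target app used by this test — the start logic is assumed from the launch trace, paths are shortened, and waitforlisten is as sketched earlier:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='test/json_config/extra_key.json')

json_config_test_start_app() {
    local app=$1; shift
    # app_params is intentionally unquoted so it word-splits into individual flags.
    build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
    app_pid[$app]=$!
    echo 'Waiting for target to run...'
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
}

json_config_test_start_app target --json "${configs_path[target]}"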
00:05:11.208 [2024-07-24 19:41:02.557302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875596 ] 00:05:11.208 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.468 [2024-07-24 19:41:02.988222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.727 [2024-07-24 19:41:03.080073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.988 19:41:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.988 19:41:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.988 00:05:11.988 19:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:11.988 INFO: shutting down applications... 00:05:11.988 19:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1875596 ]] 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1875596 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1875596 00:05:11.988 19:41:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1875596 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.556 19:41:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.556 SPDK target shutdown done 00:05:12.556 19:41:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.556 Success 00:05:12.556 00:05:12.556 real 0m1.440s 00:05:12.556 user 0m1.054s 00:05:12.556 sys 0m0.537s 00:05:12.556 19:41:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.556 19:41:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.556 ************************************ 00:05:12.556 END TEST json_config_extra_key 00:05:12.556 ************************************ 00:05:12.556 19:41:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.556 19:41:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.556 19:41:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.556 19:41:03 -- common/autotest_common.sh@10 -- # set +x 00:05:12.556 
************************************ 00:05:12.556 START TEST alias_rpc 00:05:12.556 ************************************ 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.556 * Looking for test storage... 00:05:12.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:12.556 19:41:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.556 19:41:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1875872 00:05:12.556 19:41:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1875872 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1875872 ']' 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.556 19:41:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.556 19:41:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.556 [2024-07-24 19:41:04.039855] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:12.556 [2024-07-24 19:41:04.039898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875872 ] 00:05:12.556 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.556 [2024-07-24 19:41:04.092921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.815 [2024-07-24 19:41:04.174402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.382 19:41:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.383 19:41:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.383 19:41:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:13.642 19:41:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1875872 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1875872 ']' 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1875872 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1875872 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1875872' 00:05:13.642 killing process with pid 1875872 00:05:13.642 19:41:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 1875872 00:05:13.642 19:41:05 
alias_rpc -- common/autotest_common.sh@974 -- # wait 1875872 00:05:13.902 00:05:13.902 real 0m1.447s 00:05:13.902 user 0m1.611s 00:05:13.902 sys 0m0.357s 00:05:13.902 19:41:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.902 19:41:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.902 ************************************ 00:05:13.902 END TEST alias_rpc 00:05:13.902 ************************************ 00:05:13.902 19:41:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:13.902 19:41:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.902 19:41:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.902 19:41:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.902 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.902 ************************************ 00:05:13.902 START TEST spdkcli_tcp 00:05:13.902 ************************************ 00:05:13.902 19:41:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:14.179 * Looking for test storage... 00:05:14.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1876155 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1876155 00:05:14.179 19:41:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1876155 ']' 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.179 19:41:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.179 [2024-07-24 19:41:05.578010] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
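Note: the alias_rpc suite that just finished drives a single call, scripts/rpc.py load_config -i, against the freshly started target; -i (include aliases) makes load_config register the rpc.py method aliases, which is the behavior under test. A sketch, assuming an empty config fed on stdin is enough to exercise the path:

    # Load an (empty) JSON config and register rpc.py method aliases along the way
    echo '{"subsystems": []}' | ./scripts/rpc.py load_config -i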
00:05:14.179 [2024-07-24 19:41:05.578066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876155 ] 00:05:14.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.179 [2024-07-24 19:41:05.632944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.179 [2024-07-24 19:41:05.709065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.179 [2024-07-24 19:41:05.709068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.126 19:41:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.126 19:41:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:15.126 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1876388 00:05:15.126 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:15.126 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:15.126 [ 00:05:15.126 "bdev_malloc_delete", 00:05:15.126 "bdev_malloc_create", 00:05:15.126 "bdev_null_resize", 00:05:15.126 "bdev_null_delete", 00:05:15.126 "bdev_null_create", 00:05:15.126 "bdev_nvme_cuse_unregister", 00:05:15.126 "bdev_nvme_cuse_register", 00:05:15.126 "bdev_opal_new_user", 00:05:15.126 "bdev_opal_set_lock_state", 00:05:15.126 "bdev_opal_delete", 00:05:15.126 "bdev_opal_get_info", 00:05:15.126 "bdev_opal_create", 00:05:15.126 "bdev_nvme_opal_revert", 00:05:15.126 "bdev_nvme_opal_init", 00:05:15.126 "bdev_nvme_send_cmd", 00:05:15.126 "bdev_nvme_get_path_iostat", 00:05:15.126 "bdev_nvme_get_mdns_discovery_info", 00:05:15.126 "bdev_nvme_stop_mdns_discovery", 00:05:15.126 "bdev_nvme_start_mdns_discovery", 00:05:15.126 "bdev_nvme_set_multipath_policy", 00:05:15.126 "bdev_nvme_set_preferred_path", 00:05:15.126 "bdev_nvme_get_io_paths", 00:05:15.126 "bdev_nvme_remove_error_injection", 00:05:15.126 "bdev_nvme_add_error_injection", 00:05:15.126 "bdev_nvme_get_discovery_info", 00:05:15.126 "bdev_nvme_stop_discovery", 00:05:15.126 "bdev_nvme_start_discovery", 00:05:15.126 "bdev_nvme_get_controller_health_info", 00:05:15.126 "bdev_nvme_disable_controller", 00:05:15.126 "bdev_nvme_enable_controller", 00:05:15.126 "bdev_nvme_reset_controller", 00:05:15.126 "bdev_nvme_get_transport_statistics", 00:05:15.126 "bdev_nvme_apply_firmware", 00:05:15.126 "bdev_nvme_detach_controller", 00:05:15.126 "bdev_nvme_get_controllers", 00:05:15.126 "bdev_nvme_attach_controller", 00:05:15.126 "bdev_nvme_set_hotplug", 00:05:15.126 "bdev_nvme_set_options", 00:05:15.126 "bdev_passthru_delete", 00:05:15.126 "bdev_passthru_create", 00:05:15.126 "bdev_lvol_set_parent_bdev", 00:05:15.126 "bdev_lvol_set_parent", 00:05:15.126 "bdev_lvol_check_shallow_copy", 00:05:15.126 "bdev_lvol_start_shallow_copy", 00:05:15.126 "bdev_lvol_grow_lvstore", 00:05:15.126 "bdev_lvol_get_lvols", 00:05:15.126 "bdev_lvol_get_lvstores", 00:05:15.126 "bdev_lvol_delete", 00:05:15.126 "bdev_lvol_set_read_only", 00:05:15.126 "bdev_lvol_resize", 00:05:15.126 "bdev_lvol_decouple_parent", 00:05:15.126 "bdev_lvol_inflate", 00:05:15.126 "bdev_lvol_rename", 00:05:15.126 "bdev_lvol_clone_bdev", 00:05:15.126 "bdev_lvol_clone", 00:05:15.126 "bdev_lvol_snapshot", 00:05:15.126 "bdev_lvol_create", 00:05:15.126 "bdev_lvol_delete_lvstore", 00:05:15.126 
"bdev_lvol_rename_lvstore", 00:05:15.126 "bdev_lvol_create_lvstore", 00:05:15.126 "bdev_raid_set_options", 00:05:15.126 "bdev_raid_remove_base_bdev", 00:05:15.126 "bdev_raid_add_base_bdev", 00:05:15.126 "bdev_raid_delete", 00:05:15.126 "bdev_raid_create", 00:05:15.126 "bdev_raid_get_bdevs", 00:05:15.126 "bdev_error_inject_error", 00:05:15.126 "bdev_error_delete", 00:05:15.126 "bdev_error_create", 00:05:15.126 "bdev_split_delete", 00:05:15.126 "bdev_split_create", 00:05:15.126 "bdev_delay_delete", 00:05:15.126 "bdev_delay_create", 00:05:15.126 "bdev_delay_update_latency", 00:05:15.126 "bdev_zone_block_delete", 00:05:15.126 "bdev_zone_block_create", 00:05:15.126 "blobfs_create", 00:05:15.126 "blobfs_detect", 00:05:15.126 "blobfs_set_cache_size", 00:05:15.126 "bdev_aio_delete", 00:05:15.126 "bdev_aio_rescan", 00:05:15.126 "bdev_aio_create", 00:05:15.126 "bdev_ftl_set_property", 00:05:15.126 "bdev_ftl_get_properties", 00:05:15.126 "bdev_ftl_get_stats", 00:05:15.126 "bdev_ftl_unmap", 00:05:15.126 "bdev_ftl_unload", 00:05:15.126 "bdev_ftl_delete", 00:05:15.126 "bdev_ftl_load", 00:05:15.126 "bdev_ftl_create", 00:05:15.126 "bdev_virtio_attach_controller", 00:05:15.126 "bdev_virtio_scsi_get_devices", 00:05:15.126 "bdev_virtio_detach_controller", 00:05:15.126 "bdev_virtio_blk_set_hotplug", 00:05:15.126 "bdev_iscsi_delete", 00:05:15.126 "bdev_iscsi_create", 00:05:15.126 "bdev_iscsi_set_options", 00:05:15.126 "accel_error_inject_error", 00:05:15.126 "ioat_scan_accel_module", 00:05:15.126 "dsa_scan_accel_module", 00:05:15.126 "iaa_scan_accel_module", 00:05:15.126 "vfu_virtio_create_scsi_endpoint", 00:05:15.126 "vfu_virtio_scsi_remove_target", 00:05:15.126 "vfu_virtio_scsi_add_target", 00:05:15.126 "vfu_virtio_create_blk_endpoint", 00:05:15.126 "vfu_virtio_delete_endpoint", 00:05:15.126 "keyring_file_remove_key", 00:05:15.126 "keyring_file_add_key", 00:05:15.126 "keyring_linux_set_options", 00:05:15.126 "iscsi_get_histogram", 00:05:15.126 "iscsi_enable_histogram", 00:05:15.126 "iscsi_set_options", 00:05:15.126 "iscsi_get_auth_groups", 00:05:15.126 "iscsi_auth_group_remove_secret", 00:05:15.126 "iscsi_auth_group_add_secret", 00:05:15.126 "iscsi_delete_auth_group", 00:05:15.126 "iscsi_create_auth_group", 00:05:15.126 "iscsi_set_discovery_auth", 00:05:15.126 "iscsi_get_options", 00:05:15.126 "iscsi_target_node_request_logout", 00:05:15.126 "iscsi_target_node_set_redirect", 00:05:15.126 "iscsi_target_node_set_auth", 00:05:15.126 "iscsi_target_node_add_lun", 00:05:15.126 "iscsi_get_stats", 00:05:15.126 "iscsi_get_connections", 00:05:15.126 "iscsi_portal_group_set_auth", 00:05:15.126 "iscsi_start_portal_group", 00:05:15.126 "iscsi_delete_portal_group", 00:05:15.126 "iscsi_create_portal_group", 00:05:15.126 "iscsi_get_portal_groups", 00:05:15.126 "iscsi_delete_target_node", 00:05:15.126 "iscsi_target_node_remove_pg_ig_maps", 00:05:15.126 "iscsi_target_node_add_pg_ig_maps", 00:05:15.126 "iscsi_create_target_node", 00:05:15.126 "iscsi_get_target_nodes", 00:05:15.126 "iscsi_delete_initiator_group", 00:05:15.126 "iscsi_initiator_group_remove_initiators", 00:05:15.126 "iscsi_initiator_group_add_initiators", 00:05:15.126 "iscsi_create_initiator_group", 00:05:15.126 "iscsi_get_initiator_groups", 00:05:15.126 "nvmf_set_crdt", 00:05:15.126 "nvmf_set_config", 00:05:15.126 "nvmf_set_max_subsystems", 00:05:15.126 "nvmf_stop_mdns_prr", 00:05:15.126 "nvmf_publish_mdns_prr", 00:05:15.126 "nvmf_subsystem_get_listeners", 00:05:15.126 "nvmf_subsystem_get_qpairs", 00:05:15.126 "nvmf_subsystem_get_controllers", 00:05:15.126 
"nvmf_get_stats", 00:05:15.126 "nvmf_get_transports", 00:05:15.126 "nvmf_create_transport", 00:05:15.126 "nvmf_get_targets", 00:05:15.126 "nvmf_delete_target", 00:05:15.126 "nvmf_create_target", 00:05:15.126 "nvmf_subsystem_allow_any_host", 00:05:15.126 "nvmf_subsystem_remove_host", 00:05:15.126 "nvmf_subsystem_add_host", 00:05:15.126 "nvmf_ns_remove_host", 00:05:15.126 "nvmf_ns_add_host", 00:05:15.126 "nvmf_subsystem_remove_ns", 00:05:15.126 "nvmf_subsystem_add_ns", 00:05:15.126 "nvmf_subsystem_listener_set_ana_state", 00:05:15.126 "nvmf_discovery_get_referrals", 00:05:15.126 "nvmf_discovery_remove_referral", 00:05:15.126 "nvmf_discovery_add_referral", 00:05:15.126 "nvmf_subsystem_remove_listener", 00:05:15.126 "nvmf_subsystem_add_listener", 00:05:15.126 "nvmf_delete_subsystem", 00:05:15.126 "nvmf_create_subsystem", 00:05:15.126 "nvmf_get_subsystems", 00:05:15.126 "env_dpdk_get_mem_stats", 00:05:15.126 "nbd_get_disks", 00:05:15.126 "nbd_stop_disk", 00:05:15.126 "nbd_start_disk", 00:05:15.126 "ublk_recover_disk", 00:05:15.126 "ublk_get_disks", 00:05:15.126 "ublk_stop_disk", 00:05:15.126 "ublk_start_disk", 00:05:15.126 "ublk_destroy_target", 00:05:15.126 "ublk_create_target", 00:05:15.126 "virtio_blk_create_transport", 00:05:15.126 "virtio_blk_get_transports", 00:05:15.126 "vhost_controller_set_coalescing", 00:05:15.126 "vhost_get_controllers", 00:05:15.126 "vhost_delete_controller", 00:05:15.126 "vhost_create_blk_controller", 00:05:15.126 "vhost_scsi_controller_remove_target", 00:05:15.126 "vhost_scsi_controller_add_target", 00:05:15.126 "vhost_start_scsi_controller", 00:05:15.126 "vhost_create_scsi_controller", 00:05:15.126 "thread_set_cpumask", 00:05:15.127 "framework_get_governor", 00:05:15.127 "framework_get_scheduler", 00:05:15.127 "framework_set_scheduler", 00:05:15.127 "framework_get_reactors", 00:05:15.127 "thread_get_io_channels", 00:05:15.127 "thread_get_pollers", 00:05:15.127 "thread_get_stats", 00:05:15.127 "framework_monitor_context_switch", 00:05:15.127 "spdk_kill_instance", 00:05:15.127 "log_enable_timestamps", 00:05:15.127 "log_get_flags", 00:05:15.127 "log_clear_flag", 00:05:15.127 "log_set_flag", 00:05:15.127 "log_get_level", 00:05:15.127 "log_set_level", 00:05:15.127 "log_get_print_level", 00:05:15.127 "log_set_print_level", 00:05:15.127 "framework_enable_cpumask_locks", 00:05:15.127 "framework_disable_cpumask_locks", 00:05:15.127 "framework_wait_init", 00:05:15.127 "framework_start_init", 00:05:15.127 "scsi_get_devices", 00:05:15.127 "bdev_get_histogram", 00:05:15.127 "bdev_enable_histogram", 00:05:15.127 "bdev_set_qos_limit", 00:05:15.127 "bdev_set_qd_sampling_period", 00:05:15.127 "bdev_get_bdevs", 00:05:15.127 "bdev_reset_iostat", 00:05:15.127 "bdev_get_iostat", 00:05:15.127 "bdev_examine", 00:05:15.127 "bdev_wait_for_examine", 00:05:15.127 "bdev_set_options", 00:05:15.127 "notify_get_notifications", 00:05:15.127 "notify_get_types", 00:05:15.127 "accel_get_stats", 00:05:15.127 "accel_set_options", 00:05:15.127 "accel_set_driver", 00:05:15.127 "accel_crypto_key_destroy", 00:05:15.127 "accel_crypto_keys_get", 00:05:15.127 "accel_crypto_key_create", 00:05:15.127 "accel_assign_opc", 00:05:15.127 "accel_get_module_info", 00:05:15.127 "accel_get_opc_assignments", 00:05:15.127 "vmd_rescan", 00:05:15.127 "vmd_remove_device", 00:05:15.127 "vmd_enable", 00:05:15.127 "sock_get_default_impl", 00:05:15.127 "sock_set_default_impl", 00:05:15.127 "sock_impl_set_options", 00:05:15.127 "sock_impl_get_options", 00:05:15.127 "iobuf_get_stats", 00:05:15.127 "iobuf_set_options", 
00:05:15.127 "keyring_get_keys", 00:05:15.127 "framework_get_pci_devices", 00:05:15.127 "framework_get_config", 00:05:15.127 "framework_get_subsystems", 00:05:15.127 "vfu_tgt_set_base_path", 00:05:15.127 "trace_get_info", 00:05:15.127 "trace_get_tpoint_group_mask", 00:05:15.127 "trace_disable_tpoint_group", 00:05:15.127 "trace_enable_tpoint_group", 00:05:15.127 "trace_clear_tpoint_mask", 00:05:15.127 "trace_set_tpoint_mask", 00:05:15.127 "spdk_get_version", 00:05:15.127 "rpc_get_methods" 00:05:15.127 ] 00:05:15.127 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.127 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:15.127 19:41:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1876155 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1876155 ']' 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1876155 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1876155 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1876155' 00:05:15.127 killing process with pid 1876155 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1876155 00:05:15.127 19:41:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1876155 00:05:15.386 00:05:15.386 real 0m1.468s 00:05:15.386 user 0m2.718s 00:05:15.386 sys 0m0.411s 00:05:15.386 19:41:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.386 19:41:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.386 ************************************ 00:05:15.386 END TEST spdkcli_tcp 00:05:15.386 ************************************ 00:05:15.386 19:41:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.386 19:41:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.386 19:41:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.386 19:41:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.386 ************************************ 00:05:15.386 START TEST dpdk_mem_utility 00:05:15.386 ************************************ 00:05:15.386 19:41:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.644 * Looking for test storage... 
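Note: the spdkcli_tcp suite above talks to the target over TCP instead of the UNIX socket: socat bridges port 9998 to /var/tmp/spdk.sock, and the long JSON array is the rpc_get_methods reply fetched through that bridge. The bridge plus query reduce to (all values as in the trace):

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # List every registered RPC over TCP: up to 100 retries, 2 s timeout each
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"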
00:05:15.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:15.644 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.644 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1876462 00:05:15.644 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1876462 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1876462 ']' 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.644 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.644 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.644 [2024-07-24 19:41:07.110551] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:15.644 [2024-07-24 19:41:07.110599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876462 ] 00:05:15.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.644 [2024-07-24 19:41:07.163398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.902 [2024-07-24 19:41:07.243327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.470 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.470 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:16.470 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.470 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.470 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.470 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.470 { 00:05:16.470 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.470 } 00:05:16.470 19:41:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.470 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.470 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:16.470 1 heaps totaling size 814.000000 MiB 00:05:16.470 size: 814.000000 MiB heap id: 0 00:05:16.470 end heaps---------- 00:05:16.470 8 mempools totaling size 598.116089 MiB 00:05:16.470 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.470 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.470 size: 84.521057 MiB name: bdev_io_1876462 00:05:16.470 size: 51.011292 MiB name: evtpool_1876462 00:05:16.470 
size: 50.003479 MiB name: msgpool_1876462 00:05:16.470 size: 21.763794 MiB name: PDU_Pool 00:05:16.471 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.471 size: 0.026123 MiB name: Session_Pool 00:05:16.471 end mempools------- 00:05:16.471 6 memzones totaling size 4.142822 MiB 00:05:16.471 size: 1.000366 MiB name: RG_ring_0_1876462 00:05:16.471 size: 1.000366 MiB name: RG_ring_1_1876462 00:05:16.471 size: 1.000366 MiB name: RG_ring_4_1876462 00:05:16.471 size: 1.000366 MiB name: RG_ring_5_1876462 00:05:16.471 size: 0.125366 MiB name: RG_ring_2_1876462 00:05:16.471 size: 0.015991 MiB name: RG_ring_3_1876462 00:05:16.471 end memzones------- 00:05:16.471 19:41:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.471 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:16.471 list of free elements. size: 12.519348 MiB 00:05:16.471 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:16.471 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:16.471 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:16.471 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:16.471 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:16.471 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:16.471 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:16.471 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:16.471 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:16.471 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:16.471 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:16.471 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:16.471 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:16.471 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:16.471 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:16.471 list of standard malloc elements. 
size: 199.218079 MiB 00:05:16.471 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:16.471 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:16.471 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:16.471 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:16.471 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:16.471 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:16.471 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:16.471 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:16.471 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:16.471 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:16.471 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:16.471 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:16.471 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:16.471 list of memzone associated elements. 
size: 602.262573 MiB 00:05:16.471 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:16.471 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.471 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:16.471 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.471 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:16.471 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1876462_0 00:05:16.471 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:16.471 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1876462_0 00:05:16.471 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:16.471 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1876462_0 00:05:16.471 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:16.471 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.471 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:16.471 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.471 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:16.471 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1876462 00:05:16.471 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:16.471 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1876462 00:05:16.471 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:16.471 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1876462 00:05:16.471 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:16.471 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.471 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:16.471 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.471 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:16.471 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.471 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:16.471 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.471 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:16.471 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1876462 00:05:16.471 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:16.471 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1876462 00:05:16.471 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:16.471 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1876462 00:05:16.471 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:16.471 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1876462 00:05:16.471 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:16.471 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1876462 00:05:16.471 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:16.471 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.471 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:16.471 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.471 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:16.471 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.471 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:16.471 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1876462 00:05:16.471 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:16.471 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.471 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:16.471 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.471 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:16.471 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1876462 00:05:16.471 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:16.471 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.471 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:16.471 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1876462 00:05:16.471 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:16.471 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1876462 00:05:16.471 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:16.471 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.471 19:41:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.471 19:41:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1876462 00:05:16.471 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1876462 ']' 00:05:16.471 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1876462 00:05:16.471 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:16.471 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1876462 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1876462' 00:05:16.472 killing process with pid 1876462 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1876462 00:05:16.472 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1876462 00:05:17.041 00:05:17.041 real 0m1.389s 00:05:17.041 user 0m1.477s 00:05:17.041 sys 0m0.377s 00:05:17.041 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.041 19:41:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.041 ************************************ 00:05:17.041 END TEST dpdk_mem_utility 00:05:17.041 ************************************ 00:05:17.041 19:41:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.041 19:41:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.041 19:41:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.041 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:05:17.041 ************************************ 00:05:17.041 START TEST event 00:05:17.041 ************************************ 00:05:17.041 19:41:08 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.041 * Looking for test storage... 
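Note: the memory report above is produced in two steps: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary, then element by element for heap 0 with -m 0. Against a running target:

    # Ask the target to dump its DPDK memory stats (the reply names the dump file)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize heaps, mempools, and memzones, then detail heap id 0
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0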
00:05:17.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:17.041 19:41:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:17.041 19:41:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.041 19:41:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.041 19:41:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:17.041 19:41:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.041 19:41:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.041 ************************************ 00:05:17.041 START TEST event_perf 00:05:17.041 ************************************ 00:05:17.041 19:41:08 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.041 Running I/O for 1 seconds...[2024-07-24 19:41:08.556712] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:17.041 [2024-07-24 19:41:08.556778] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876753 ] 00:05:17.041 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.041 [2024-07-24 19:41:08.614573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.301 [2024-07-24 19:41:08.694934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.301 [2024-07-24 19:41:08.695032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.301 [2024-07-24 19:41:08.695051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.301 [2024-07-24 19:41:08.695056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.238 Running I/O for 1 seconds... 00:05:18.238 lcore 0: 212744 00:05:18.238 lcore 1: 212744 00:05:18.238 lcore 2: 212744 00:05:18.238 lcore 3: 212744 00:05:18.238 done. 00:05:18.238 00:05:18.238 real 0m1.229s 00:05:18.238 user 0m4.145s 00:05:18.238 sys 0m0.081s 00:05:18.238 19:41:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.238 19:41:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.238 ************************************ 00:05:18.238 END TEST event_perf 00:05:18.238 ************************************ 00:05:18.238 19:41:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:18.238 19:41:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:18.238 19:41:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.238 19:41:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.238 ************************************ 00:05:18.238 START TEST event_reactor 00:05:18.238 ************************************ 00:05:18.238 19:41:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:18.498 [2024-07-24 19:41:09.851407] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:18.498 [2024-07-24 19:41:09.851474] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877009 ] 00:05:18.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.498 [2024-07-24 19:41:09.908191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.498 [2024-07-24 19:41:09.979902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.876 test_start 00:05:19.876 oneshot 00:05:19.876 tick 100 00:05:19.876 tick 100 00:05:19.876 tick 250 00:05:19.876 tick 100 00:05:19.876 tick 100 00:05:19.876 tick 100 00:05:19.876 tick 250 00:05:19.876 tick 500 00:05:19.876 tick 100 00:05:19.876 tick 100 00:05:19.876 tick 250 00:05:19.876 tick 100 00:05:19.876 tick 100 00:05:19.876 test_end 00:05:19.876 00:05:19.876 real 0m1.222s 00:05:19.876 user 0m1.141s 00:05:19.876 sys 0m0.076s 00:05:19.876 19:41:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.876 19:41:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:19.876 ************************************ 00:05:19.876 END TEST event_reactor 00:05:19.876 ************************************ 00:05:19.876 19:41:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.876 19:41:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.876 19:41:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.876 19:41:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.876 ************************************ 00:05:19.876 START TEST event_reactor_perf 00:05:19.876 ************************************ 00:05:19.876 19:41:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.876 [2024-07-24 19:41:11.136896] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:19.876 [2024-07-24 19:41:11.136966] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877257 ] 00:05:19.876 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.876 [2024-07-24 19:41:11.192453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.876 [2024-07-24 19:41:11.265521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.812 test_start 00:05:20.812 test_end 00:05:20.812 Performance: 504028 events per second 00:05:20.812 00:05:20.812 real 0m1.217s 00:05:20.812 user 0m1.142s 00:05:20.812 sys 0m0.071s 00:05:20.812 19:41:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.812 19:41:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.812 ************************************ 00:05:20.812 END TEST event_reactor_perf 00:05:20.812 ************************************ 00:05:20.812 19:41:12 event -- event/event.sh@49 -- # uname -s 00:05:20.812 19:41:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.812 19:41:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.812 19:41:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.812 19:41:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.812 19:41:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.812 ************************************ 00:05:20.812 START TEST event_scheduler 00:05:20.812 ************************************ 00:05:20.812 19:41:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:21.071 * Looking for test storage... 00:05:21.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:21.071 19:41:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.071 19:41:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1877533 00:05:21.071 19:41:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.071 19:41:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.071 19:41:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1877533 00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1877533 ']' 00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
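Note: the three event suites above (event_perf, event_reactor, event_reactor_perf) run standalone binaries from test/event with a core mask and a duration; the per-lcore counts, the tick trace, and the 504028 events-per-second figure are their direct output. The invocations reduce to:

    # One second of event processing across four cores; per-lcore counts are printed
    ./test/event/event_perf/event_perf -m 0xF -t 1

    # Single-reactor oneshot/tick exercise, then a raw events-per-second measurement
    ./test/event/reactor/reactor -t 1
    ./test/event/reactor_perf/reactor_perf -t 1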
00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.071 19:41:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.071 [2024-07-24 19:41:12.505179] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:21.071 [2024-07-24 19:41:12.505221] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877533 ] 00:05:21.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.071 [2024-07-24 19:41:12.556881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.071 [2024-07-24 19:41:12.631633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.071 [2024-07-24 19:41:12.631720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.071 [2024-07-24 19:41:12.631804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.071 [2024-07-24 19:41:12.631805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:22.013 19:41:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 [2024-07-24 19:41:13.326204] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:22.013 [2024-07-24 19:41:13.326223] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.013 [2024-07-24 19:41:13.326233] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.013 [2024-07-24 19:41:13.326239] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.013 [2024-07-24 19:41:13.326244] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 [2024-07-24 19:41:13.398025] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
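Note: the scheduler app above is started with --wait-for-rpc, switched to the dynamic scheduler, and only then initialized. On this host the dpdk governor rejects the core mask (it covers some but not all SMT siblings), so the dynamic scheduler runs with its defaults: load limit 20, core limit 80, core busy 95. Both RPCs appear in the rpc_get_methods list earlier; via rpc.py the sequence is:

    # Pick the dynamic scheduler while the app is still waiting in --wait-for-rpc
    ./scripts/rpc.py framework_set_scheduler dynamic
    # Finish subsystem init so the reactors and the scheduler start running
    ./scripts/rpc.py framework_start_init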
00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 ************************************ 00:05:22.013 START TEST scheduler_create_thread 00:05:22.013 ************************************ 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 2 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 3 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 4 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 5 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 6 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 7 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 8 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 9 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.013 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.014 10 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.014 19:41:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.583 19:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.583 19:41:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.583 19:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.583 19:41:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.959 19:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.959 19:41:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.959 19:41:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.959 19:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.959 19:41:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.334 19:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.334 00:05:25.334 real 0m3.100s 00:05:25.334 user 0m0.023s 00:05:25.334 sys 0m0.006s 00:05:25.334 19:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.334 19:41:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.334 ************************************ 00:05:25.334 END TEST scheduler_create_thread 00:05:25.334 ************************************ 00:05:25.334 19:41:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.334 19:41:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1877533 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1877533 ']' 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1877533 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1877533 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1877533' 00:05:25.334 killing process with pid 1877533 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1877533 00:05:25.334 19:41:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1877533 00:05:25.334 [2024-07-24 19:41:16.909429] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
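Note: scheduler_create_thread, which just ended, drives everything through an rpc.py plugin rather than core RPCs: scheduler_thread_create adds threads with an optional cpumask (-m) and a target active percentage (-a), scheduler_thread_set_active retunes one by ID, and scheduler_thread_delete removes one. Condensed from the trace, assuming the scheduler_plugin module is on rpc.py's plugin path as the harness arranges:

    rpc='./scripts/rpc.py --plugin scheduler_plugin'
    # Pinned threads: fully active and fully idle, one cpumask at a time (0x1..0x8)
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned threads with fixed activity levels
    $rpc scheduler_thread_create -n one_third_active -a 30
    id=$($rpc scheduler_thread_create -n half_active -a 0)
    # Drive the new thread to 50% active, then create one more and delete it
    $rpc scheduler_thread_set_active "$id" 50
    id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$id"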
00:05:25.594 00:05:25.594 real 0m4.733s 00:05:25.594 user 0m9.290s 00:05:25.594 sys 0m0.342s 00:05:25.594 19:41:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.594 19:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.594 ************************************ 00:05:25.594 END TEST event_scheduler 00:05:25.594 ************************************ 00:05:25.594 19:41:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.594 19:41:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.594 19:41:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.594 19:41:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.594 19:41:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.853 ************************************ 00:05:25.853 START TEST app_repeat 00:05:25.853 ************************************ 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1878446 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1878446' 00:05:25.853 Process app_repeat pid: 1878446 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.853 spdk_app_start Round 0 00:05:25.853 19:41:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1878446 /var/tmp/spdk-nbd.sock 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1878446 ']' 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.853 19:41:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.853 [2024-07-24 19:41:17.217890] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:25.853 [2024-07-24 19:41:17.217929] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878446 ] 00:05:25.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.853 [2024-07-24 19:41:17.271119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.853 [2024-07-24 19:41:17.351563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.853 [2024-07-24 19:41:17.351565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.790 19:41:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.790 19:41:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:26.790 19:41:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.790 Malloc0 00:05:26.790 19:41:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.050 Malloc1 00:05:27.050 19:41:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.050 /dev/nbd0 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.050 19:41:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.050 19:41:18 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.050 19:41:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.308 1+0 records in 00:05:27.308 1+0 records out 00:05:27.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250897 s, 16.3 MB/s 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.308 /dev/nbd1 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.308 1+0 records in 00:05:27.308 1+0 records out 00:05:27.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019973 s, 20.5 MB/s 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.308 19:41:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.308 19:41:18 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.308 19:41:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.567 { 00:05:27.567 "nbd_device": "/dev/nbd0", 00:05:27.567 "bdev_name": "Malloc0" 00:05:27.567 }, 00:05:27.567 { 00:05:27.567 "nbd_device": "/dev/nbd1", 00:05:27.567 "bdev_name": "Malloc1" 00:05:27.567 } 00:05:27.567 ]' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.567 { 00:05:27.567 "nbd_device": "/dev/nbd0", 00:05:27.567 "bdev_name": "Malloc0" 00:05:27.567 }, 00:05:27.567 { 00:05:27.567 "nbd_device": "/dev/nbd1", 00:05:27.567 "bdev_name": "Malloc1" 00:05:27.567 } 00:05:27.567 ]' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.567 /dev/nbd1' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.567 /dev/nbd1' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.567 256+0 records in 00:05:27.567 256+0 records out 00:05:27.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103323 s, 101 MB/s 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.567 256+0 records in 00:05:27.567 256+0 records out 00:05:27.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138496 s, 75.7 MB/s 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.567 256+0 records in 00:05:27.567 256+0 records out 00:05:27.567 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0143134 s, 73.3 MB/s 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.567 19:41:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.568 19:41:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.826 19:41:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.085 19:41:19 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.085 19:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.345 19:41:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.345 19:41:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.345 19:41:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.604 [2024-07-24 19:41:20.121689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.604 [2024-07-24 19:41:20.189491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.604 [2024-07-24 19:41:20.189494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.863 [2024-07-24 19:41:20.230924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.863 [2024-07-24 19:41:20.230971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.428 19:41:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.428 19:41:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.428 spdk_app_start Round 1 00:05:31.428 19:41:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1878446 /var/tmp/spdk-nbd.sock 00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1878446 ']' 00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
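Each nbd_start_disk in the round above is followed by the waitfornbd helper, whose trace repeats for nbd0 and nbd1: poll /proc/partitions until the node appears, then prove the device answers I/O with a single direct-read block. A condensed sketch of that check; the retry budget of 20 comes from the trace, but the sleep interval and the /tmp path are assumptions:

    waitfornbd() {
      local nbd_name=$1 i tmp=/tmp/nbdtest
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                   # assumed back-off; not shown in the trace
      done
      # one 4 KiB direct read confirms the kernel device is actually usable
      dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      [[ $(stat -c %s "$tmp") != 0 ]] || return 1   # trace checks '[' 4096 '!=' 0 ']'
      rm -f "$tmp"
    }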
00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.428 19:41:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.687 19:41:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.687 19:41:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:31.687 19:41:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.687 Malloc0 00:05:31.947 19:41:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.947 Malloc1 00:05:31.947 19:41:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.947 19:41:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.207 /dev/nbd0 00:05:32.207 19:41:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.207 19:41:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.207 1+0 records in 00:05:32.207 1+0 records out 00:05:32.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000102274 s, 40.0 MB/s 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.207 19:41:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.207 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.207 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.207 19:41:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.465 /dev/nbd1 00:05:32.465 19:41:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.465 19:41:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.465 19:41:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.465 1+0 records in 00:05:32.465 1+0 records out 00:05:32.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208822 s, 19.6 MB/s 00:05:32.466 19:41:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.466 19:41:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.466 19:41:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.466 19:41:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.466 19:41:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.466 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.466 19:41:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.466 19:41:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.466 19:41:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.466 19:41:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.466 19:41:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:32.466 { 00:05:32.466 "nbd_device": "/dev/nbd0", 00:05:32.466 "bdev_name": "Malloc0" 00:05:32.466 }, 00:05:32.466 { 00:05:32.466 "nbd_device": "/dev/nbd1", 00:05:32.466 "bdev_name": "Malloc1" 00:05:32.466 } 00:05:32.466 ]' 00:05:32.466 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.466 { 00:05:32.466 "nbd_device": "/dev/nbd0", 00:05:32.466 "bdev_name": "Malloc0" 00:05:32.466 }, 00:05:32.466 { 00:05:32.466 "nbd_device": "/dev/nbd1", 00:05:32.466 "bdev_name": "Malloc1" 00:05:32.466 } 00:05:32.466 ]' 00:05:32.466 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.725 /dev/nbd1' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.725 /dev/nbd1' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.725 256+0 records in 00:05:32.725 256+0 records out 00:05:32.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103435 s, 101 MB/s 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.725 256+0 records in 00:05:32.725 256+0 records out 00:05:32.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128663 s, 81.5 MB/s 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.725 256+0 records in 00:05:32.725 256+0 records out 00:05:32.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144133 s, 72.8 MB/s 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.725 19:41:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.984 19:41:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.242 19:41:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.243 19:41:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.243 19:41:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.502 19:41:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.762 [2024-07-24 19:41:25.112877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.762 [2024-07-24 19:41:25.182038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.762 [2024-07-24 19:41:25.182040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.762 [2024-07-24 19:41:25.223736] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.762 [2024-07-24 19:41:25.223777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.050 19:41:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.050 19:41:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.050 spdk_app_start Round 2 00:05:37.050 19:41:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1878446 /var/tmp/spdk-nbd.sock 00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1878446 ']' 00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
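The body of every round is the write/verify pass visible in the dd and cmp lines above: a 1 MiB random pattern is pushed through each nbd device with direct I/O, then compared back byte-for-byte against the source file. A sketch of that data path, with the workspace nbdrandtest path shortened to /tmp:

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 256 x 4 KiB = 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"                              # verify phase; non-zero exit on mismatch
    done
    rm "$tmp"

oflag=direct keeps the page cache out of the write path, so the cmp genuinely round-trips through the Malloc bdev behind each nbd device rather than reading back cached pages.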
00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.050 19:41:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.050 19:41:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.050 19:41:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.050 19:41:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.050 Malloc0 00:05:37.050 19:41:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.050 Malloc1 00:05:37.050 19:41:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.050 19:41:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.050 /dev/nbd0 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:37.309 1+0 records in 00:05:37.309 1+0 records out 00:05:37.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230804 s, 17.7 MB/s 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.309 /dev/nbd1 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.309 1+0 records in 00:05:37.309 1+0 records out 00:05:37.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258378 s, 15.9 MB/s 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.309 19:41:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.309 19:41:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:37.569 { 00:05:37.569 "nbd_device": "/dev/nbd0", 00:05:37.569 "bdev_name": "Malloc0" 00:05:37.569 }, 00:05:37.569 { 00:05:37.569 "nbd_device": "/dev/nbd1", 00:05:37.569 "bdev_name": "Malloc1" 00:05:37.569 } 00:05:37.569 ]' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.569 { 00:05:37.569 "nbd_device": "/dev/nbd0", 00:05:37.569 "bdev_name": "Malloc0" 00:05:37.569 }, 00:05:37.569 { 00:05:37.569 "nbd_device": "/dev/nbd1", 00:05:37.569 "bdev_name": "Malloc1" 00:05:37.569 } 00:05:37.569 ]' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.569 /dev/nbd1' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.569 /dev/nbd1' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.569 256+0 records in 00:05:37.569 256+0 records out 00:05:37.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104246 s, 101 MB/s 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.569 256+0 records in 00:05:37.569 256+0 records out 00:05:37.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147799 s, 70.9 MB/s 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.569 19:41:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.828 256+0 records in 00:05:37.828 256+0 records out 00:05:37.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149715 s, 70.0 MB/s 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.828 19:41:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.087 19:41:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.346 19:41:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.346 19:41:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.605 19:41:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.605 [2024-07-24 19:41:30.167908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.863 [2024-07-24 19:41:30.237910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.863 [2024-07-24 19:41:30.237912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.863 [2024-07-24 19:41:30.278307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.863 [2024-07-24 19:41:30.278345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.392 19:41:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1878446 /var/tmp/spdk-nbd.sock 00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1878446 ']' 00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
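Teardown is symmetric each round, as the traces above show: nbd_stop_disk per device, waitfornbd_exit polling until the node leaves /proc/partitions, an nbd_get_disks call that must come back empty, and finally spdk_kill_instance. Roughly, assuming the same socket and a guessed poll interval:

    sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
      ./scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$dev")" /proc/partitions || break   # node gone: stop polling
        sleep 0.1                                                   # assumed interval
      done
    done
    count=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [[ "$count" -eq 0 ]]                            # trace's '[' 0 -ne 0 ']' failing is the success case
    ./scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM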
00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.393 19:41:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:41.653 19:41:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1878446 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1878446 ']' 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1878446 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1878446 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1878446' 00:05:41.653 killing process with pid 1878446 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1878446 00:05:41.653 19:41:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1878446 00:05:41.913 spdk_app_start is called in Round 0. 00:05:41.913 Shutdown signal received, stop current app iteration 00:05:41.913 Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 reinitialization... 00:05:41.913 spdk_app_start is called in Round 1. 00:05:41.913 Shutdown signal received, stop current app iteration 00:05:41.913 Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 reinitialization... 00:05:41.913 spdk_app_start is called in Round 2. 00:05:41.913 Shutdown signal received, stop current app iteration 00:05:41.913 Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 reinitialization... 00:05:41.913 spdk_app_start is called in Round 3. 
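killprocess, traced twice in this section (pids 1877533 and 1878446), is deliberately defensive: it checks that the pid is non-empty and still alive, looks up the command name on Linux so it never signals a bare sudo wrapper, and reaps the child with wait. Reconstructed from the traced checks, with the autotest_common.sh line numbers omitted:

    killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                    # is the process still running?
      if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1     # refuse to kill the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap; valid because the test shell started it
    }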
00:05:41.913 Shutdown signal received, stop current app iteration 00:05:41.913 19:41:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.913 19:41:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:41.913 00:05:41.913 real 0m16.172s 00:05:41.913 user 0m35.159s 00:05:41.913 sys 0m2.352s 00:05:41.913 19:41:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.913 19:41:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 ************************************ 00:05:41.913 END TEST app_repeat 00:05:41.913 ************************************ 00:05:41.913 19:41:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.913 19:41:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.913 19:41:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.913 19:41:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.913 19:41:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.913 ************************************ 00:05:41.913 START TEST cpu_locks 00:05:41.913 ************************************ 00:05:41.913 19:41:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.913 * Looking for test storage... 00:05:42.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:42.172 19:41:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.172 19:41:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.172 19:41:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.172 19:41:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.172 19:41:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.172 19:41:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.172 19:41:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.172 ************************************ 00:05:42.172 START TEST default_locks 00:05:42.172 ************************************ 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1881330 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1881330 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1881330 ']' 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
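default_locks exercises SPDK's default behavior: an app started with -m 0x1 claims an advisory file lock for each core in its mask (core 0 here) under /var/tmp/spdk_cpu_lock_*, and the suite's locks_exist helper verifies the claim with lslocks, as the next lines trace. The stray 'lslocks: write error' that follows appears to be a side effect of grep -q closing the pipe after its first match, not a test failure. A minimal sketch of the same check; $SPDK_DIR is again an illustrative stand-in:

    #!/usr/bin/env bash
    # Start a target pinned to core 0; by default it takes a POSIX lock on
    # that core's lock file (/var/tmp/spdk_cpu_lock_000).
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 2   # the real suite polls the RPC socket (waitforlisten) instead
    # lslocks lists the locks each process holds; the suite just greps for
    # the spdk_cpu_lock naming used by the app.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $pid"
    fi
    kill "$pid"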
00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.172 19:41:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.172 [2024-07-24 19:41:33.606443] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:42.172 [2024-07-24 19:41:33.606494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881330 ] 00:05:42.172 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.172 [2024-07-24 19:41:33.662539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.172 [2024-07-24 19:41:33.736817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.109 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.110 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:43.110 19:41:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1881330 00:05:43.110 19:41:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1881330 00:05:43.110 19:41:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.370 lslocks: write error 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1881330 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1881330 ']' 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1881330 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1881330 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1881330' 00:05:43.370 killing process with pid 1881330 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1881330 00:05:43.370 19:41:34 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1881330 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1881330 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1881330 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1881330 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1881330 ']' 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1881330) - No such process 00:05:43.629 ERROR: process (pid: 1881330) is no longer running 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.629 00:05:43.629 real 0m1.625s 00:05:43.629 user 0m1.707s 00:05:43.629 sys 0m0.550s 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.629 19:41:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.629 ************************************ 00:05:43.629 END TEST default_locks 00:05:43.629 ************************************ 00:05:43.629 19:41:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.629 19:41:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.629 19:41:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.629 19:41:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.889 ************************************ 00:05:43.889 START TEST default_locks_via_rpc 00:05:43.889 ************************************ 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1881751 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1881751 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1881751 ']' 00:05:43.889 19:41:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.889 19:41:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.889 [2024-07-24 19:41:35.289016] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:43.889 [2024-07-24 19:41:35.289064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881751 ] 00:05:43.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.889 [2024-07-24 19:41:35.340501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.889 [2024-07-24 19:41:35.420350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1881751 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1881751 00:05:44.827 19:41:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@73 -- # killprocess 1881751 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1881751 ']' 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1881751 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1881751 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1881751' 00:05:45.088 killing process with pid 1881751 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1881751 00:05:45.088 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1881751 00:05:45.348 00:05:45.348 real 0m1.604s 00:05:45.348 user 0m1.685s 00:05:45.348 sys 0m0.512s 00:05:45.348 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.348 19:41:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.348 ************************************ 00:05:45.348 END TEST default_locks_via_rpc 00:05:45.348 ************************************ 00:05:45.348 19:41:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.348 19:41:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.348 19:41:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.348 19:41:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.348 ************************************ 00:05:45.348 START TEST non_locking_app_on_locked_coremask 00:05:45.348 ************************************ 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1882009 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1882009 /var/tmp/spdk.sock 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1882009 ']' 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:45.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.348 19:41:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.607 [2024-07-24 19:41:36.957342] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:45.607 [2024-07-24 19:41:36.957385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882009 ] 00:05:45.607 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.607 [2024-07-24 19:41:37.009095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.607 [2024-07-24 19:41:37.088840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1882165 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1882165 /var/tmp/spdk2.sock 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1882165 ']' 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.208 19:41:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.208 [2024-07-24 19:41:37.786365] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:46.208 [2024-07-24 19:41:37.786412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882165 ] 00:05:46.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.467 [2024-07-24 19:41:37.861945] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
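The 'CPU core locks deactivated.' notice above is the direct effect of launching this second target with --disable-cpumask-locks: it skips claiming the per-core lock files, so it can share core 0 with the first target, which still holds the lock; -r gives it its own RPC socket so the two instances stay independently controllable. A sketch of the pairing, under the same $SPDK_DIR assumption:

    #!/usr/bin/env bash
    # Instance A claims core 0's lock file as usual.
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
    # Instance B opts out of core locking and uses a separate RPC socket,
    # so both run on core 0 at once; it logs "CPU core locks deactivated."
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &
    wait

Without the flag, the second launch would abort with the claim_cpu_cores error ('Cannot create lock on core 0, probably process ... has claimed it'), which is exactly what the locking_app_on_locked_coremask test further down provokes on purpose.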
00:05:46.467 [2024-07-24 19:41:37.861973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.467 [2024-07-24 19:41:38.015533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.035 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.035 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.035 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1882009 00:05:47.035 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1882009 00:05:47.035 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.608 lslocks: write error 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1882009 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1882009 ']' 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1882009 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.608 19:41:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882009 00:05:47.608 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.608 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.608 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882009' 00:05:47.608 killing process with pid 1882009 00:05:47.608 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1882009 00:05:47.608 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1882009 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1882165 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1882165 ']' 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1882165 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882165 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882165' 00:05:48.216 
killing process with pid 1882165 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1882165 00:05:48.216 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1882165 00:05:48.475 00:05:48.475 real 0m3.071s 00:05:48.475 user 0m3.312s 00:05:48.475 sys 0m0.827s 00:05:48.475 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.475 19:41:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.475 ************************************ 00:05:48.475 END TEST non_locking_app_on_locked_coremask 00:05:48.475 ************************************ 00:05:48.475 19:41:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:48.475 19:41:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.475 19:41:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.475 19:41:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.475 ************************************ 00:05:48.475 START TEST locking_app_on_unlocked_coremask 00:05:48.475 ************************************ 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1882517 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1882517 /var/tmp/spdk.sock 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1882517 ']' 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.475 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.734 [2024-07-24 19:41:40.100729] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:48.734 [2024-07-24 19:41:40.100775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882517 ] 00:05:48.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.734 [2024-07-24 19:41:40.155067] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.734 [2024-07-24 19:41:40.155094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.734 [2024-07-24 19:41:40.227651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.300 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.300 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1882743 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1882743 /var/tmp/spdk2.sock 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1882743 ']' 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.559 19:41:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.559 [2024-07-24 19:41:40.948251] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:49.559 [2024-07-24 19:41:40.948301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882743 ] 00:05:49.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.559 [2024-07-24 19:41:41.022572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.817 [2024-07-24 19:41:41.168814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.384 19:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.384 19:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.384 19:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1882743 00:05:50.384 19:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1882743 00:05:50.384 19:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.642 lslocks: write error 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1882517 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1882517 ']' 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1882517 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.642 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882517 00:05:50.901 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.901 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.901 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882517' 00:05:50.901 killing process with pid 1882517 00:05:50.901 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1882517 00:05:50.901 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1882517 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1882743 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1882743 ']' 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1882743 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.470 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882743 00:05:51.471 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:51.471 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.471 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882743' 00:05:51.471 killing process with pid 1882743 00:05:51.471 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1882743 00:05:51.471 19:41:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1882743 00:05:51.729 00:05:51.729 real 0m3.188s 00:05:51.729 user 0m3.420s 00:05:51.729 sys 0m0.901s 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.729 ************************************ 00:05:51.729 END TEST locking_app_on_unlocked_coremask 00:05:51.729 ************************************ 00:05:51.729 19:41:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.729 19:41:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.729 19:41:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.729 19:41:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.729 ************************************ 00:05:51.729 START TEST locking_app_on_locked_coremask 00:05:51.729 ************************************ 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1883185 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1883185 /var/tmp/spdk.sock 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1883185 ']' 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.729 19:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.987 [2024-07-24 19:41:43.357702] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:51.987 [2024-07-24 19:41:43.357744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883185 ] 00:05:51.987 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.987 [2024-07-24 19:41:43.410734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.987 [2024-07-24 19:41:43.491069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1883250 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1883250 /var/tmp/spdk2.sock 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1883250 /var/tmp/spdk2.sock 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1883250 /var/tmp/spdk2.sock 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1883250 ']' 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.921 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.921 [2024-07-24 19:41:44.210318] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:52.921 [2024-07-24 19:41:44.210365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883250 ] 00:05:52.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.921 [2024-07-24 19:41:44.281760] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1883185 has claimed it. 00:05:52.921 [2024-07-24 19:41:44.281790] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1883250) - No such process 00:05:53.487 ERROR: process (pid: 1883250) is no longer running 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1883185 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.487 19:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1883185 00:05:53.745 lslocks: write error 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1883185 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1883185 ']' 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1883185 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1883185 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1883185' 00:05:53.745 killing process with pid 1883185 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1883185 00:05:53.745 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1883185 00:05:54.003 00:05:54.003 real 0m2.262s 00:05:54.003 user 0m2.515s 00:05:54.003 sys 0m0.585s 00:05:54.003 19:41:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.003 19:41:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.003 ************************************ 00:05:54.003 END TEST locking_app_on_locked_coremask 00:05:54.003 ************************************ 00:05:54.003 19:41:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.003 19:41:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.003 19:41:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.003 19:41:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.262 ************************************ 00:05:54.262 START TEST locking_overlapped_coremask 00:05:54.262 ************************************ 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1883516 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1883516 /var/tmp/spdk.sock 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1883516 ']' 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.262 19:41:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.262 [2024-07-24 19:41:45.684930] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:54.262 [2024-07-24 19:41:45.684977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883516 ] 00:05:54.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.262 [2024-07-24 19:41:45.736717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.262 [2024-07-24 19:41:45.818379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.262 [2024-07-24 19:41:45.818395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.262 [2024-07-24 19:41:45.818397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1883746 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1883746 /var/tmp/spdk2.sock 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1883746 /var/tmp/spdk2.sock 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1883746 /var/tmp/spdk2.sock 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1883746 ']' 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.196 19:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.196 [2024-07-24 19:41:46.540668] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:05:55.196 [2024-07-24 19:41:46.540715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883746 ] 00:05:55.196 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.196 [2024-07-24 19:41:46.618129] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1883516 has claimed it. 00:05:55.196 [2024-07-24 19:41:46.618166] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1883746) - No such process 00:05:55.762 ERROR: process (pid: 1883746) is no longer running 00:05:55.762 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.762 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:55.762 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:55.762 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1883516 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1883516 ']' 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1883516 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1883516 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1883516' 00:05:55.763 killing process with pid 1883516 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1883516 00:05:55.763 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1883516 00:05:56.022 00:05:56.022 real 0m1.888s 00:05:56.022 user 0m5.349s 00:05:56.022 sys 0m0.388s 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.022 ************************************ 00:05:56.022 END TEST locking_overlapped_coremask 00:05:56.022 ************************************ 00:05:56.022 19:41:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.022 19:41:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.022 19:41:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.022 19:41:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.022 ************************************ 00:05:56.022 START TEST locking_overlapped_coremask_via_rpc 00:05:56.022 ************************************ 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1884004 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1884004 /var/tmp/spdk.sock 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1884004 ']' 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.022 19:41:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.280 [2024-07-24 19:41:47.637803] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:56.280 [2024-07-24 19:41:47.637847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884004 ] 00:05:56.280 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.280 [2024-07-24 19:41:47.691171] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.280 [2024-07-24 19:41:47.691198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.280 [2024-07-24 19:41:47.761986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.280 [2024-07-24 19:41:47.762088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.280 [2024-07-24 19:41:47.762091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1884020 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1884020 /var/tmp/spdk2.sock 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1884020 ']' 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.846 19:41:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.105 [2024-07-24 19:41:48.468362] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:57.105 [2024-07-24 19:41:48.468407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884020 ] 00:05:57.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.105 [2024-07-24 19:41:48.543141] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
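In this via_rpc variant both targets start with --disable-cpumask-locks (hence the notice above) on overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they can only coexist while locking is off. The test then turns locking back on at runtime through the framework_enable_cpumask_locks RPC: the first instance claims its cores, and the second instance's attempt must fail on the shared core 2, producing the JSON-RPC error shown just below. A sketch of that sequence, with $SPDK_DIR as before:

    #!/usr/bin/env bash
    # First instance (mask 0x7, default socket) claims cores 0-2 at runtime.
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # Second instance (mask 0x1c, spdk2.sock) must be refused: core 2 is taken.
    if ! "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "lock claim refused on core 2, as the test expects"
    fi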
00:05:57.105 [2024-07-24 19:41:48.543173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.105 [2024-07-24 19:41:48.688793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.106 [2024-07-24 19:41:48.692093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.106 [2024-07-24 19:41:48.692094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.042 [2024-07-24 19:41:49.299115] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1884004 has claimed it. 
00:05:58.042 request: 00:05:58.042 { 00:05:58.042 "method": "framework_enable_cpumask_locks", 00:05:58.042 "req_id": 1 00:05:58.042 } 00:05:58.042 Got JSON-RPC error response 00:05:58.042 response: 00:05:58.042 { 00:05:58.042 "code": -32603, 00:05:58.042 "message": "Failed to claim CPU core: 2" 00:05:58.042 } 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1884004 /var/tmp/spdk.sock 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1884004 ']' 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1884020 /var/tmp/spdk2.sock 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1884020 ']' 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.042 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.043 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
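The exchange above is the point of this test: the first target was started with -m 0x7 --disable-cpumask-locks (cores 0-2, no lock files taken at startup) and then claimed its cores over RPC, while the second target uses -m 0x1c (cores 2-4), overlapping on core 2. A minimal standalone sketch of the same conflict, assuming an SPDK checkout at ./spdk and root privileges; the pids and workspace paths in the trace are specific to this run:

./spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &    # cores 0-2, lock claiming deferred
./spdk/scripts/rpc.py framework_enable_cpumask_locks          # claims /var/tmp/spdk_cpu_lock_000..002
./spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # overlaps on core 2
./spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected to fail with JSON-RPC error -32603 "Failed to claim CPU core: 2", exactly as in the response above
# (the harness additionally waits for each RPC socket to come up before issuing commands)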
00:05:58.043 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.043 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.302 00:05:58.302 real 0m2.086s 00:05:58.302 user 0m0.854s 00:05:58.302 sys 0m0.164s 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.302 19:41:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 ************************************ 00:05:58.302 END TEST locking_overlapped_coremask_via_rpc 00:05:58.302 ************************************ 00:05:58.302 19:41:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.302 19:41:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1884004 ]] 00:05:58.302 19:41:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1884004 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1884004 ']' 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1884004 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1884004 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1884004' 00:05:58.302 killing process with pid 1884004 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1884004 00:05:58.302 19:41:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1884004 00:05:58.562 19:41:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1884020 ]] 00:05:58.562 19:41:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1884020 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1884020 ']' 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1884020 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1884020 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1884020' 00:05:58.562 killing process with pid 1884020 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1884020 00:05:58.562 19:41:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1884020 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1884004 ]] 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1884004 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1884004 ']' 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1884004 00:05:59.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1884004) - No such process 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1884004 is not found' 00:05:59.130 Process with pid 1884004 is not found 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1884020 ]] 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1884020 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1884020 ']' 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1884020 00:05:59.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1884020) - No such process 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1884020 is not found' 00:05:59.130 Process with pid 1884020 is not found 00:05:59.130 19:41:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.130 00:05:59.130 real 0m17.014s 00:05:59.130 user 0m29.363s 00:05:59.130 sys 0m4.807s 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.130 19:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 END TEST cpu_locks 00:05:59.130 ************************************ 00:05:59.130 00:05:59.130 real 0m42.047s 00:05:59.130 user 1m20.416s 00:05:59.130 sys 0m8.042s 00:05:59.130 19:41:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.130 19:41:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 END TEST event 00:05:59.130 ************************************ 00:05:59.130 19:41:50 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.130 19:41:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.130 19:41:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.130 19:41:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 START TEST thread 00:05:59.130 ************************************ 00:05:59.130 19:41:50 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.130 * Looking for test storage... 00:05:59.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:59.130 19:41:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.130 19:41:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:59.130 19:41:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.130 19:41:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.130 ************************************ 00:05:59.130 START TEST thread_poller_perf 00:05:59.130 ************************************ 00:05:59.130 19:41:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.130 [2024-07-24 19:41:50.673762] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:05:59.130 [2024-07-24 19:41:50.673832] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884571 ] 00:05:59.130 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.389 [2024-07-24 19:41:50.732633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.389 [2024-07-24 19:41:50.804366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.389 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:00.324 ====================================== 00:06:00.324 busy:2304766826 (cyc) 00:06:00.324 total_run_count: 407000 00:06:00.324 tsc_hz: 2300000000 (cyc) 00:06:00.324 ====================================== 00:06:00.324 poller_cost: 5662 (cyc), 2461 (nsec) 00:06:00.324 00:06:00.324 real 0m1.227s 00:06:00.324 user 0m1.152s 00:06:00.324 sys 0m0.071s 00:06:00.324 19:41:51 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.324 19:41:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.324 ************************************ 00:06:00.324 END TEST thread_poller_perf 00:06:00.324 ************************************ 00:06:00.324 19:41:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.324 19:41:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:00.324 19:41:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.324 19:41:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.583 ************************************ 00:06:00.583 START TEST thread_poller_perf 00:06:00.583 ************************************ 00:06:00.583 19:41:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.583 [2024-07-24 19:41:51.969268] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
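The summary block printed for the first run is plain division over the counters it shows; a sketch of the arithmetic, with the values copied from above:

busy=2304766826 runs=407000 tsc_hz=2300000000
echo "$(( busy / runs )) cyc per poll"                          # 5662
echo "$(( busy * 1000000000 / tsc_hz / runs )) nsec per poll"   # 2462 here; the harness rounds the cycle count first and prints 2461

So on this 2.3 GHz machine each of the 1000 pollers with a 1 usec period costs roughly 5662 TSC cycles (about 2.5 usec) per invocation.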
00:06:00.583 [2024-07-24 19:41:51.969337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884817 ] 00:06:00.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.583 [2024-07-24 19:41:52.027439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.583 [2024-07-24 19:41:52.099163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.583 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:01.962 ====================================== 00:06:01.962 busy:2301414012 (cyc) 00:06:01.962 total_run_count: 5488000 00:06:01.962 tsc_hz: 2300000000 (cyc) 00:06:01.962 ====================================== 00:06:01.962 poller_cost: 419 (cyc), 182 (nsec) 00:06:01.962 00:06:01.962 real 0m1.220s 00:06:01.962 user 0m1.143s 00:06:01.962 sys 0m0.072s 00:06:01.963 19:41:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.963 19:41:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.963 ************************************ 00:06:01.963 END TEST thread_poller_perf 00:06:01.963 ************************************ 00:06:01.963 19:41:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.963 00:06:01.963 real 0m2.669s 00:06:01.963 user 0m2.384s 00:06:01.963 sys 0m0.291s 00:06:01.963 19:41:53 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.963 19:41:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.963 ************************************ 00:06:01.963 END TEST thread 00:06:01.963 ************************************ 00:06:01.963 19:41:53 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:01.963 19:41:53 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.963 19:41:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.963 19:41:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.963 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.963 ************************************ 00:06:01.963 START TEST app_cmdline 00:06:01.963 ************************************ 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.963 * Looking for test storage... 00:06:01.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.963 19:41:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.963 19:41:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1885105 00:06:01.963 19:41:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1885105 00:06:01.963 19:41:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1885105 ']' 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.963 19:41:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.963 [2024-07-24 19:41:53.401484] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:06:01.963 [2024-07-24 19:41:53.401533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885105 ] 00:06:01.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.963 [2024-07-24 19:41:53.453131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.964 [2024-07-24 19:41:53.526702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:02.900 { 00:06:02.900 "version": "SPDK v24.09-pre git sha1 3bc1795d3", 00:06:02.900 "fields": { 00:06:02.900 "major": 24, 00:06:02.900 "minor": 9, 00:06:02.900 "patch": 0, 00:06:02.900 "suffix": "-pre", 00:06:02.900 "commit": "3bc1795d3" 00:06:02.900 } 00:06:02.900 } 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.900 19:41:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
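Only two methods answer on this target because it was launched with --rpcs-allowed spdk_get_version,rpc_get_methods; anything else comes back as method-not-found. A condensed sketch of the three probes the harness makes (same calls as in the trace, checkout path assumed):

./spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # -> rpc_get_methods, spdk_get_version
./spdk/scripts/rpc.py spdk_get_version | jq -r '.version'    # -> SPDK v24.09-pre git sha1 3bc1795d3
./spdk/scripts/rpc.py env_dpdk_get_mem_stats                 # -> error -32601 "Method not found", as in the next exchange below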
00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:02.900 19:41:54 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.161 request: 00:06:03.161 { 00:06:03.161 "method": "env_dpdk_get_mem_stats", 00:06:03.161 "req_id": 1 00:06:03.161 } 00:06:03.161 Got JSON-RPC error response 00:06:03.161 response: 00:06:03.161 { 00:06:03.161 "code": -32601, 00:06:03.161 "message": "Method not found" 00:06:03.161 } 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.161 19:41:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1885105 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1885105 ']' 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1885105 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1885105 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1885105' 00:06:03.161 killing process with pid 1885105 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@969 -- # kill 1885105 00:06:03.161 19:41:54 app_cmdline -- common/autotest_common.sh@974 -- # wait 1885105 00:06:03.420 00:06:03.420 real 0m1.637s 00:06:03.420 user 0m1.952s 00:06:03.420 sys 0m0.403s 00:06:03.420 19:41:54 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.420 19:41:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.420 ************************************ 00:06:03.420 END TEST app_cmdline 00:06:03.420 ************************************ 00:06:03.420 19:41:54 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:03.420 19:41:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.420 19:41:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.420 19:41:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.420 ************************************ 00:06:03.420 START TEST version 00:06:03.420 ************************************ 00:06:03.420 19:41:54 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:03.679 * Looking for test storage... 
00:06:03.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.679 19:41:55 version -- app/version.sh@17 -- # get_header_version major 00:06:03.679 19:41:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # cut -f2 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.679 19:41:55 version -- app/version.sh@17 -- # major=24 00:06:03.679 19:41:55 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.679 19:41:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # cut -f2 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.679 19:41:55 version -- app/version.sh@18 -- # minor=9 00:06:03.679 19:41:55 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.679 19:41:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # cut -f2 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.679 19:41:55 version -- app/version.sh@19 -- # patch=0 00:06:03.679 19:41:55 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.679 19:41:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # cut -f2 00:06:03.679 19:41:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.679 19:41:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.679 19:41:55 version -- app/version.sh@22 -- # version=24.9 00:06:03.679 19:41:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.679 19:41:55 version -- app/version.sh@28 -- # version=24.9rc0 00:06:03.680 19:41:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:03.680 19:41:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:03.680 19:41:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:03.680 19:41:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:03.680 00:06:03.680 real 0m0.152s 00:06:03.680 user 0m0.080s 00:06:03.680 sys 0m0.110s 00:06:03.680 19:41:55 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.680 19:41:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.680 ************************************ 00:06:03.680 END TEST version 00:06:03.680 ************************************ 00:06:03.680 19:41:55 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@202 -- # uname -s 00:06:03.680 19:41:55 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:03.680 19:41:55 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:03.680 19:41:55 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:03.680 19:41:55 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
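version.sh assembles its answer purely from the C header; every field comes from the same grep/cut/tr pipeline seen above. The MAJOR case spelled out, checkout path assumed:

grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' ./spdk/include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24
# MINOR, PATCH and SUFFIX are extracted the same way; with patch == 0 the script drops the patch
# digit and maps the -pre suffix to rc0, producing the 24.9rc0 it compares against python's spdk.__version__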
00:06:03.680 19:41:55 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:03.680 19:41:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.680 19:41:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.680 19:41:55 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:03.680 19:41:55 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:03.680 19:41:55 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:03.680 19:41:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:03.680 19:41:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.680 19:41:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.680 ************************************ 00:06:03.680 START TEST nvmf_tcp 00:06:03.680 ************************************ 00:06:03.680 19:41:55 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:03.939 * Looking for test storage... 00:06:03.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:03.939 19:41:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:03.939 19:41:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:03.939 19:41:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:03.939 19:41:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:03.939 19:41:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.939 19:41:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 ************************************ 00:06:03.939 START TEST nvmf_target_core 00:06:03.939 ************************************ 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:03.939 * Looking for test storage... 00:06:03.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:03.939 19:41:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.940 19:41:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:03.940 ************************************ 00:06:03.940 START TEST nvmf_abort 00:06:03.940 ************************************ 00:06:03.940 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:04.199 * Looking for test storage... 
00:06:04.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
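One detail from the common.sh sourcing above deserves a gloss: the host identity is regenerated on every run. A sketch of the derivation, assuming nvme-cli is installed; the parameter expansion is an illustration that matches the two values printed in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # 80aaeb9f-0274-ea11-906e-0017a4403562, the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # later handed to 'nvme connect'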
00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:04.199 19:41:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.470 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:09.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:09.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.471 19:42:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:09.471 Found net devices under 0000:86:00.0: cvl_0_0 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:09.471 Found net devices under 0000:86:00.1: cvl_0_1 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:09.471 
19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:09.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:06:09.471 00:06:09.471 --- 10.0.0.2 ping statistics --- 00:06:09.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.471 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:06:09.471 00:06:09.471 --- 10.0.0.1 ping statistics --- 00:06:09.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.471 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1888584 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1888584 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1888584 ']' 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.471 19:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.471 [2024-07-24 19:42:00.839983] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:06:09.471 [2024-07-24 19:42:00.840031] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.471 [2024-07-24 19:42:00.899650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.471 [2024-07-24 19:42:00.980750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.471 [2024-07-24 19:42:00.980788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.471 [2024-07-24 19:42:00.980796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.471 [2024-07-24 19:42:00.980803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.471 [2024-07-24 19:42:00.980808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
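The nvmf_tcp_init sequence traced above is what lets a single host run both ends of an NVMe/TCP connection over physical NICs: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the firewall is opened for port 4420, and a ping in each direction proves the data path before the target starts. Condensed from the traced commands, with only the comments added (root required; the two ports are evidently cabled back to back, since the cross-namespace pings succeed):

ip -4 addr flush cvl_0_0                                      # start both ports clean
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # root ns reaches the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and the target reaches back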
00:06:09.472 [2024-07-24 19:42:00.980911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.472 [2024-07-24 19:42:00.980994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.472 [2024-07-24 19:42:00.980995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 [2024-07-24 19:42:01.692890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 Malloc0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 Delay0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 [2024-07-24 19:42:01.766493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.406 19:42:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:10.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.406 [2024-07-24 19:42:01.875371] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:12.938 Initializing NVMe Controllers 00:06:12.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:12.938 controller IO queue size 128 less than required 00:06:12.938 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:12.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:12.938 Initialization complete. Launching workers. 
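The whole target side of the abort test is assembled over the RPC socket before the example client above was launched: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the delay bdev in front of Malloc0 is the point of the exercise, since I/O held for a full second leaves the client plenty of queued commands to abort. The same sequence condensed (rpc.py stands for the full scripts/rpc.py path from the trace; the four bdev_delay_create values are latencies in microseconds):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256            # TCP transport, options as traced
rpc.py bdev_malloc_create 64 4096 -b Malloc0                     # 64 MiB RAM bdev, 4 KiB blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # hold every I/O for ~1 s
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0   # Delay0 becomes NSID 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort run whose results follow (-q 128 against that one-second delay) then tallies how many of the deliberately stalled commands were successfully aborted.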
00:06:12.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 122, failed: 41656 00:06:12.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41716, failed to submit 62 00:06:12.938 success 41660, unsuccess 56, failed 0 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.938 rmmod nvme_tcp 00:06:12.938 rmmod nvme_fabrics 00:06:12.938 rmmod nvme_keyring 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1888584 ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1888584 ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1888584' 00:06:12.938 killing process with pid 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1888584 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.938 19:42:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.476 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:15.476 00:06:15.476 real 0m10.965s 00:06:15.476 user 0m13.293s 00:06:15.476 sys 0m4.998s 00:06:15.476 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.476 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.476 ************************************ 00:06:15.476 END TEST nvmf_abort 00:06:15.476 ************************************ 00:06:15.476 19:42:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.477 ************************************ 00:06:15.477 START TEST nvmf_ns_hotplug_stress 00:06:15.477 ************************************ 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.477 * Looking for test storage... 
00:06:15.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.477 19:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:20.748 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.748 19:42:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:20.748 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:20.748 Found net devices under 0000:86:00.0: cvl_0_0 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.748 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:20.749 Found net devices under 0000:86:00.1: cvl_0_1 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.749 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:20.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:06:20.749 00:06:20.749 --- 10.0.0.2 ping statistics --- 00:06:20.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.749 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:06:20.749 00:06:20.749 --- 10.0.0.1 ping statistics --- 00:06:20.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.749 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1893181 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1893181 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1893181 ']' 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
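The device scan traced during this init (nvmf/common.sh@382-401, the same walk the abort test did earlier) maps each supported PCI function to its kernel interface purely through sysfs: list the net children of the PCI device, strip the path down to the interface name, and keep it only while the link is up. A minimal sketch of that lookup for the first port; reading operstate is an assumption about where the trace's 'up == up' comparison comes from:

pci=0000:86:00.0
net_devs=()
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")              # basename only: cvl_0_0
for net_dev in "${pci_net_devs[@]}"; do
    # keep interfaces whose link state reads as up
    [[ $(< "/sys/class/net/$net_dev/operstate") == up ]] && net_devs+=("$net_dev")
done
echo "Found net devices under $pci: ${net_devs[*]}"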
00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.749 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.749 [2024-07-24 19:42:12.313181] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:06:20.749 [2024-07-24 19:42:12.313225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.749 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.009 [2024-07-24 19:42:12.371286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.009 [2024-07-24 19:42:12.451010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.009 [2024-07-24 19:42:12.451052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.009 [2024-07-24 19:42:12.451060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.009 [2024-07-24 19:42:12.451066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.009 [2024-07-24 19:42:12.451072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:21.009 [2024-07-24 19:42:12.451115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.009 [2024-07-24 19:42:12.451203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.009 [2024-07-24 19:42:12.451204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:21.579 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:21.838 [2024-07-24 19:42:13.311219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.838 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.095 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.354 
[2024-07-24 19:42:13.706210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.354 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:22.354 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:22.613 Malloc0 00:06:22.613 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.873 Delay0 00:06:22.873 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.873 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:23.133 NULL1 00:06:23.133 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:23.393 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1893562 00:06:23.393 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:23.393 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:23.393 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.652 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.652 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:23.652 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:23.912 true 00:06:23.912 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:23.912 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.171 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:24.171 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:24.171 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:24.468 true 00:06:24.468 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:24.468 19:42:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.846 Read completed with error (sct=0, sc=11) 00:06:25.846 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.846 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.846 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.846 true 00:06:26.104 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:26.104 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.044 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.044 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:27.044 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:27.044 true 00:06:27.044 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:27.044 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.303 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.562 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:27.562 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:27.562 true 00:06:27.822 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:27.822 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.822 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.101 [2024-07-24 19:42:19.547019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.101 [2024-07-24 19:42:19.547752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:28.101 [2024-07-24 19:42:19.547792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the same ctrlr_bdev.c:309 'Read NLB 1 * block size 512 > SGL length 1' error repeated, timestamps 19:42:19.547832 through 19:42:19.551166 elided ...] 
00:06:28.102 [2024-07-24 19:42:19.551211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-24 19:42:19.551254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.551969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.102 [2024-07-24 19:42:19.552202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.552590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553884] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.553975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.554919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 
[2024-07-24 19:42:19.554959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.555681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.556969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.557004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.103 [2024-07-24 19:42:19.557041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557418] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.557984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 
[2024-07-24 19:42:19.558460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.558638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.559966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.560978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561152] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.104 [2024-07-24 19:42:19.561367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.561869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 
[2024-07-24 19:42:19.562622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.562998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.563976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564705] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.564876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.565968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.566014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.566065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.566109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.566153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.105 [2024-07-24 19:42:19.566208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 
[2024-07-24 19:42:19.566293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.566988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.106 [2024-07-24 19:42:19.567316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:06:28.106 [2024-07-24 19:42:19.567354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same read error repeats back-to-back from 19:42:19.567 through 19:42:19.575; duplicates elided ...]
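For context on the flood above: the target rejects each read because the requested transfer, NLB * block size = 1 * 512 bytes, is larger than the 1-byte SGL the host attached, so ctrlr_bdev.c fails the command up front instead of issuing I/O. That is expected noise here, since the hotplug stress test keeps changing the namespace under live traffic. A minimal standalone sketch of that kind of bounds check, reconstructed from the message text rather than quoted from the SPDK source (names and return convention are illustrative):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the guard that emits the repeated log line above: reject a
     * read whose requested transfer is larger than the SGL the host sent.
     * Reconstructed from the message text; names are illustrative. */
    static int
    read_cmd_check(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
        if (nlb * block_size > sgl_length) {
            fprintf(stderr,
                    "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    nlb, block_size, sgl_length);
            return -1; /* command completes with an SGL-length status, no I/O */
        }
        return 0;
    }

    int
    main(void)
    {
        /* the exact values from the log: 1 block of 512 bytes vs. a 1-byte SGL */
        return read_cmd_check(1, 512, 1) == -1 ? 0 : 1;
    }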
[... read errors continue; duplicates elided ...]
00:06:28.108 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:06:28.108 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
[... read errors continue around both script lines; duplicates elided ...]
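The two trace lines above are the stress script stepping its size counter and applying it: bdev_null_resize NULL1 1006 asks the target to resize the null bdev NULL1 to 1006 MiB (the unit is MiB per rpc.py's bdev_null_resize help). A hypothetical C driver for the same step, looping the way the incrementing null_size counter suggests the script does; the RPC path and bdev name are copied from the log, the wrapper and loop bounds are mine:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical driver mirroring the resize step above: bump the null
     * bdev's size one step at a time via the same RPC. The rpc.py path is
     * taken from the log; the range 1006..1016 is illustrative only. */
    int
    main(void)
    {
        const char *rpc =
            "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py";
        char cmd[512];

        for (int null_size = 1006; null_size <= 1016; null_size++) {
            snprintf(cmd, sizeof(cmd), "%s bdev_null_resize NULL1 %d",
                     rpc, null_size);
            if (system(cmd) != 0) {
                return 1; /* stop if the RPC fails */
            }
        }
        return 0;
    }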
[... read errors continue; duplicates elided ...]
00:06:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... read errors continue; duplicates elided ...]
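That suppression notice is a log rate limiter collapsing 999 repeats of the same completion into one line. Read against the NVMe base spec, sct=0 is the generic command status type and sc=15 (0x0F) is Data SGL Length Invalid, i.e. the host-visible side of the bounds check sketched earlier. A small illustrative decoder for just this pair (function name is mine, not SPDK's):

    #include <stdio.h>

    /* Illustrative decode of the (sct, sc) pair from the suppressed message.
     * Per the NVMe base spec: SCT 0 = generic command status; within it,
     * SC 0x0F (decimal 15) = Data SGL Length Invalid. */
    static const char *
    decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0 && sc == 0x0f) {
            return "generic status: Data SGL Length Invalid";
        }
        return "other status (not decoded here)";
    }

    int
    main(void)
    {
        printf("sct=0, sc=15 -> %s\n", decode_status(0, 15));
        return 0;
    }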
[... the same read error continues through 19:42:19.594; duplicates elided ...]
[2024-07-24 19:42:19.594504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.594983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.595997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.596046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.596092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.113 [2024-07-24 19:42:19.596136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596861] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.596948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.597965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 
[2024-07-24 19:42:19.598380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.598999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.599983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.114 [2024-07-24 19:42:19.600813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.600857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.600903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.600948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.600996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601040] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.601966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 
[2024-07-24 19:42:19.602173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.602994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.603990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604764] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.604996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.115 [2024-07-24 19:42:19.605370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 
[2024-07-24 19:42:19.605861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.605995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.606980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.607961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608459] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.608994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.609510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 
[2024-07-24 19:42:19.609548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.610116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.116 [2024-07-24 19:42:19.610167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.610956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.117 [2024-07-24 19:42:19.611288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd errors repeated once per queued read, timestamps 19:42:19.611330 through 19:42:19.634387, trimmed ...]
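For reference, the message flooding this stretch of the log is emitted by the read-path length check in SPDK's lib/nvmf/ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd, line 309 in this build): the target rejects a Read whose requested transfer (NLB * block size) exceeds the SGL length the host described, and the suppressed completion status below (sct=0, sc=15) is the NVMe generic "Data SGL Length Invalid" code (0x0f). The standalone sketch that follows mirrors that check under those assumptions; the function and constant names are illustrative stand-ins, not the exact SPDK source.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the NVMe generic status values seen in the
     * suppressed completions: sct=0 (generic status type), sc=0x0f (decimal 15,
     * "Data SGL Length Invalid"). */
    #define SCT_GENERIC                0x0
    #define SC_DATA_SGL_LENGTH_INVALID 0x0f

    /* Sketch of the check: fail the Read early if NLB * block size exceeds
     * the SGL length the host supplied, instead of submitting it to the bdev. */
    static int
    read_cmd_length_check(uint64_t num_blocks, uint32_t block_size,
                          uint32_t sgl_length, int *sct, int *sc)
    {
            if (num_blocks * block_size > sgl_length) {
                    fprintf(stderr,
                            "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            num_blocks, block_size, sgl_length);
                    *sct = SCT_GENERIC;
                    *sc = SC_DATA_SGL_LENGTH_INVALID;
                    return -1;
            }
            return 0;
    }

    int
    main(void)
    {
            int sct, sc;

            /* The command seen throughout this log: NLB 1, 512-byte blocks,
             * but an SGL describing only 1 byte. */
            if (read_cmd_length_check(1, 512, 1, &sct, &sc) != 0) {
                    printf("Read completed with error (sct=%d, sc=%d)\n",
                           sct, sc);
            }
            return 0;
    }

Run against the values from the log, the sketch prints the same completion summary the target suppressed ("Read completed with error (sct=0, sc=15)"), which is why a single malformed-read pattern produces this volume of output.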
size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.634980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.122 [2024-07-24 19:42:19.635352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.635391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.123 [2024-07-24 19:42:19.635954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:28.123 [2024-07-24 19:42:19.636047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.636960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.637983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638349] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.638943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 
[2024-07-24 19:42:19.639910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.639963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.123 [2024-07-24 19:42:19.640532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.640977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.641954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642562] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.642984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 
[2024-07-24 19:42:19.643540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.643974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.124 [2024-07-24 19:42:19.644860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.644904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.644950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.644994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.645991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646204] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.646977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 
[2024-07-24 19:42:19.647335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.647956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.648955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.125 [2024-07-24 19:42:19.649573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649927] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.649966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.650979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 
[2024-07-24 19:42:19.651074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.651992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.652968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653725] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.126 [2024-07-24 19:42:19.653759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.653998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.654531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.655057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.655103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.655147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.655195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 [2024-07-24 19:42:19.655242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127 
[2024-07-24 19:42:19.655288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.127
[... previous message repeated verbatim several hundred times, timestamps 19:42:19.655288 through 19:42:19.682424, elapsed 00:06:28.127-00:06:28.418 ...]
[2024-07-24 19:42:19.682464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.682984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.683860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.684929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.418 [2024-07-24 19:42:19.684974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:28.418 [2024-07-24 19:42:19.685176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.685982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.686966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687856] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.687987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.688959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 
[2024-07-24 19:42:19.689055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.418 [2024-07-24 19:42:19.689342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.689998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.690960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691747] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.691995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 
[2024-07-24 19:42:19.692822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.692960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.693581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.694978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695509] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.695969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 
[2024-07-24 19:42:19.696682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.696980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.697992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.419 [2024-07-24 19:42:19.698332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.698965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420 [2024-07-24 19:42:19.699412] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.420
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated from [2024-07-24 19:42:19.699459] through [2024-07-24 19:42:19.727199] (console time 00:06:28.420-00:06:28.423); duplicate entries elided ...]
[2024-07-24 19:42:19.727246] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.727976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 
[2024-07-24 19:42:19.728375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.728971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.729965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730945] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.730999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.731048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.731095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.731144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.423 [2024-07-24 19:42:19.731189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.731973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 
[2024-07-24 19:42:19.732162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.732693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.733961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734829] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.734999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 [2024-07-24 19:42:19.735881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.424 
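For reference, the error flooding this stretch of the log comes from the nvmf unit tests driving SPDK's read-command validation in ctrlr_bdev.c: a read is rejected when the requested transfer, NLB (number of logical blocks) times the block size, exceeds the number of bytes the command's SGL describes - here 1 * 512 bytes against a 1-byte SGL. Below is a minimal standalone sketch of that check, with illustrative names (read_fits_sgl is hypothetical, not the actual SPDK source); the sct=0, sc=15 pair in the suppressed message is assumed to correspond to the generic Data SGL Length Invalid status (0x0f == 15).

/*
 * Minimal sketch (assumed shape, not the real nvmf_bdev_ctrlr_read_cmd)
 * of the length check that emits the error repeated throughout this run.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper: true when the read fits the SGL-described buffer. */
static bool
read_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Same shape as the log line above. */
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The exact case this unit test hits over and over:
	 * 1 block * 512 bytes requested against a 1-byte SGL. */
	return read_fits_sgl(1, 512, 1) ? 0 : 1;
}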
[2024-07-24 19:42:19.736378 - 19:42:19.736471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated; duplicates condensed) 00:06:28.424
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.424
[2024-07-24 19:42:19.736515 - 19:42:19.740890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated; duplicates condensed) 00:06:28.425
true 00:06:28.425
[2024-07-24 19:42:19.740940 - 19:42:19.749257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated; duplicates condensed) 00:06:28.426 [2024-07-24 19:42:19.749301] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.749991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 
[2024-07-24 19:42:19.750460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.750961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.751994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.752986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753149] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.753985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 
[2024-07-24 19:42:19.754202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.754959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.755973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.426 [2024-07-24 19:42:19.756545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756865] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.756985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.757989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 
[2024-07-24 19:42:19.758037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.758981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.759989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760677] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.760984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.761677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 
[2024-07-24 19:42:19.762377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.762998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.427 [2024-07-24 19:42:19.763503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:06:28.427 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562
00:06:28.427 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:28.428 [2024-07-24 19:42:19.770969] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.771990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 
[2024-07-24 19:42:19.772562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.772963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.428 [2024-07-24 19:42:19.773411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.773978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.774670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775243] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.775996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 
[2024-07-24 19:42:19.776429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.776968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.777891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.778972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779049] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.779983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 
[2024-07-24 19:42:19.780111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.780968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.781992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.429 [2024-07-24 19:42:19.782310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782733] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.782963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 
[2024-07-24 19:42:19.783904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.783991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.784964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.785989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786621] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.786977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.787481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.430 [2024-07-24 19:42:19.787973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.788018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.788067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.430 [2024-07-24 19:42:19.788116] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:28.430 [... identical *ERROR* line repeated several hundred times, 2024-07-24 19:42:19.788163 through 19:42:19.815060 ...]
00:06:28.433 [2024-07-24 19:42:19.815109] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.433 [2024-07-24 19:42:19.815165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.433 [2024-07-24 19:42:19.815212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.433 [2024-07-24 19:42:19.815258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.815966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 
[2024-07-24 19:42:19.816257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.816704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.817985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818951] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.818991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.819744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 
[2024-07-24 19:42:19.820549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.820972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.821954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822774] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.822988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.823971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.824019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.824066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.824114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.434 [2024-07-24 19:42:19.824159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 
[2024-07-24 19:42:19.824444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.824996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.825986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.826990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827090] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.827989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 
[2024-07-24 19:42:19.828157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.828979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.829567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830846] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.830987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 [2024-07-24 19:42:19.831923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.435 
[2024-07-24 19:42:19.831968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* entries from ctrlr_bdev.c:309 repeated continuously, 19:42:19.832014 through 19:42:19.839177 (console stamps 00:06:28.435-00:06:28.436); duplicates collapsed ...]
00:06:28.436 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* entries continue, 19:42:19.839635 through 19:42:19.859334 (console stamps 00:06:28.436-00:06:28.439); duplicates collapsed ...]
size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.857897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.857943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.857992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.858542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859378] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.859967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 
[2024-07-24 19:42:19.860550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.860987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.861743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.862963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863286] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.863999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 
[2024-07-24 19:42:19.864438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.864955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.439 [2024-07-24 19:42:19.865914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.865950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.865995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.866988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867231] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.867998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 
[2024-07-24 19:42:19.868323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.868952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.869976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.870996] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.871651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 
[2024-07-24 19:42:19.872605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.872969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.873998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.440 [2024-07-24 19:42:19.874627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.874667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.874707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.874748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.874791] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.874832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.875999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 [2024-07-24 19:42:19.876471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.441 
00:06:28.441 [2024-07-24 19:42:19.876520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 error repeats verbatim for each queued read, only the timestamp varying (19:42:19.876567 through 19:42:19.888760); duplicates elided ...]
00:06:28.442 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical ctrlr_bdev.c:309 errors continue (19:42:19.888801 through 19:42:19.903209); duplicates elided ...]
00:06:28.444 [2024-07-24 19:42:19.903250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.903764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904832] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.904988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.905968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 
[2024-07-24 19:42:19.906014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.444 [2024-07-24 19:42:19.906484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.906993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.907988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908755] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.908990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 
[2024-07-24 19:42:19.909693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.909993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.910964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.911958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912284] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.912993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 
[2024-07-24 19:42:19.913947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.913987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.914991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.445 [2024-07-24 19:42:19.915587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.915999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916172] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.916539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 
[2024-07-24 19:42:19.917730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.917964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.918965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.919999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920435] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.920998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 
[2024-07-24 19:42:19.921614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.921994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.922997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 [2024-07-24 19:42:19.923033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.446 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.729 [2024-07-24 19:42:20.116437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.730 [2024-07-24 19:42:20.116984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 
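The flood above is the NVMe-oF target rejecting each read because the transfer the command implies is larger than the data buffer the host described: NLB 1 * block size 512 = 512 bytes, against an SGL covering only 1 byte, so 512 > 1 and the command is failed before it reaches the bdev. A minimal standalone sketch of that bounds check follows; it is illustrative only, with hypothetical function and variable names, not the actual ctrlr_bdev.c source:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical reconstruction of the check behind the log line
     * "Read NLB x * block size y > SGL length z": a read is rejected
     * when the bytes it would transfer exceed the SGL length. */
    static bool read_len_within_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_len)
    {
        if (nlb * block_size > sgl_len) {
            fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)nlb, block_size, sgl_len);
            return false;  /* complete the request with an error status instead */
        }
        return true;
    }

    int main(void)
    {
        /* The values seen throughout this log: 1 block of 512 bytes
         * against a 1-byte SGL, so every such read fails. */
        return read_len_within_sgl(1, 512, 1) ? 0 : 1;
    }

Compiled and run, the sketch prints the same line once and exits non-zero; the ns_hotplug_stress test drives the target into taking that error path thousands of times, which is why the log collapses the completions into "Message suppressed 999 times" notices.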
00:06:28.730 [2024-07-24 19:42:20.116437 - 19:42:20.126796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same record repeated back-to-back; duplicates elided)
00:06:28.732 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:06:28.732 [2024-07-24 19:42:20.126843 - 19:42:20.139416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same record repeated back-to-back; duplicates elided; the run is still in progress at the end of this span)
size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.139963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140440] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.140751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.735 [2024-07-24 19:42:20.141777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.141818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.141870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.141911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.141956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 
[2024-07-24 19:42:20.142056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.142979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.143808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.736 [2024-07-24 19:42:20.144563] ctrlr_bdev.c: 
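What the repeated entry means: nvmf_bdev_ctrlr_read_cmd() (ctrlr_bdev.c:309 in this build) rejects a Read whose transfer length, NLB times the block size, exceeds the SGL buffer the host supplied; 1 block of 512 bytes cannot fit in a 1-byte SGL, so the command is failed before it ever reaches the bdev. A minimal C sketch of that check follows; only the function and file reference come from the log itself, and every name and type below is an illustrative assumption, not the actual SPDK code.

/* Hedged sketch of the length validation the log line implies;
 * not the real ctrlr_bdev.c implementation. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool read_len_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_len)
{
        if (nlb * block_size > sgl_len) {
                /* Mirrors the message seen throughout this log. */
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu64 "\n", nlb, block_size, sgl_len);
                return false;  /* caller completes the command with an error */
        }
        return true;           /* safe to submit the read to the bdev */
}

int main(void)
{
        /* The exact case from the log: 1 block * 512 bytes vs. a 1-byte SGL. */
        return read_len_ok(1, 512, 1) ? 0 : 1;  /* exits 1: the check fails */
}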
00:06:28.736 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:06:28.736 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
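The two script lines are the test activity buried in the error flood: ns_hotplug_stress.sh bumps null_size to 1007 and resizes the NULL1 null bdev through SPDK's JSON-RPC interface while the reads above keep being rejected. rpc.py is a thin JSON-RPC client; a hedged C equivalent of that single call is sketched below, assuming the target listens on SPDK's usual default RPC socket /var/tmp/spdk.sock and that the JSON parameter names match rpc.py's bdev_null_resize arguments (neither detail appears in this log).

/* Sketch: send the same bdev_null_resize request that rpc.py issues,
 * as a raw JSON-RPC message over SPDK's Unix-domain RPC socket.
 * Socket path and parameter names are assumptions. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("connect");
                return 1;
        }
        /* Equivalent of: rpc.py bdev_null_resize NULL1 1007 */
        const char *req =
                "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_null_resize\","
                "\"params\":{\"name\":\"NULL1\",\"new_size\":1007}}";
        if (write(fd, req, strlen(req)) < 0) {
                perror("write");
                return 1;
        }
        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
                buf[n] = '\0';
                printf("%s\n", buf);  /* JSON-RPC response from the target */
        }
        close(fd);
        return 0;
}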
[... the same ctrlr_bdev.c:309 read-length error continues to repeat around and after the resize, 19:42:20.144787 through 19:42:20.161771; duplicates collapsed ...]
00:06:28.741 [2024-07-24 19:42:20.161818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.161863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.161907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.161954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.162977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.163987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 
[2024-07-24 19:42:20.164076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.164963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.741 [2024-07-24 19:42:20.165422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.165967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166350] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.166971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.167988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 
[2024-07-24 19:42:20.168027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.168973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.169997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.742 [2024-07-24 19:42:20.170047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.170714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 
[2024-07-24 19:42:20.171833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.171982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.172963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.173859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174491] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.743 [2024-07-24 19:42:20.174847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.174886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.174930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.174973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 
[2024-07-24 19:42:20.175631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.175998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.176971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.744 [2024-07-24 19:42:20.177594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.177972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.744 [2024-07-24 19:42:20.178258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:06:28.744 [2024-07-24 19:42:20.178311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated several hundred times, 2024-07-24 19:42:20.178353 through 19:42:20.205264]
00:06:28.750
[2024-07-24 19:42:20.205308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.750 [2024-07-24 19:42:20.205777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.205835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.205881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.205924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.205973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.206974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.207959] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.208970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 
[2024-07-24 19:42:20.209092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.209996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.751 [2024-07-24 19:42:20.210829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.210876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.210920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.210969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211833] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.211998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.212562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 
[2024-07-24 19:42:20.213450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.213999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.214994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.215041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.215094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.215137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.215187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.752 [2024-07-24 19:42:20.215236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215616] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.215792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.216964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 
[2024-07-24 19:42:20.217142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.217988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.218987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219783] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.219959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.753 [2024-07-24 19:42:20.220326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.220935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 
[2024-07-24 19:42:20.220981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.221966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.222992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223615] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.754 [2024-07-24 19:42:20.223971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 
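The flood above is a negative-path unit test hammering the read-command length validation in ctrlr_bdev.c: the target rejects a read whose requested transfer size (NLB x block size, here 1 x 512 = 512 bytes) exceeds the buffer described by the command's SGL (here 1 byte), and the suppressed completion status (sct=0, sc=15) matches the NVMe generic status "Data SGL Length Invalid" (0x0f). The following is a minimal standalone sketch of that kind of check; the function and variable names are illustrative, not SPDK's actual implementation.

/* Sketch (illustrative, not SPDK's code) of the validation behind
 * "Read NLB 1 * block size 512 > SGL length 1": a read command's data
 * must fit in the SGL-described buffer, or it is rejected. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
    /* Product is computed in 64-bit arithmetic; block_size is promoted. */
    if (num_blocks * (uint64_t)block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", num_blocks, block_size, sgl_length);
        return false;
    }
    return true;
}

int
main(void)
{
    /* The case exercised above: one 512-byte block against a 1-byte SGL. */
    if (!read_cmd_length_ok(1, 512, 1)) {
        printf("command rejected (completed with error, e.g. sct=0, sc=15)\n");
    }
    return 0;
}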
[2024-07-24 19:42:20.224640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.224965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.225970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.755 [2024-07-24 19:42:20.226017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 
19:42:20.226246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.226981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:28.755 [2024-07-24 19:42:20.227309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.227964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.755 [2024-07-24 19:42:20.228512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.228560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.228753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229939] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.229978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.230967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 
[2024-07-24 19:42:20.231056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.231746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.232981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.756 [2024-07-24 19:42:20.233459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233748] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.233993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 
[2024-07-24 19:42:20.234825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.234920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.235990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.236990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237475] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.757 [2024-07-24 19:42:20.237701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.237982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.238993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 
[2024-07-24 19:42:20.239071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.239966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.240959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241204] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.241969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.758 [2024-07-24 19:42:20.242730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.242780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.242820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 
[2024-07-24 19:42:20.242863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.242904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.242944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.242988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.759 [2024-07-24 19:42:20.243909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:28.759 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated, timestamps 2024-07-24 19:42:20.243956 through 19:42:20.271409, elapsed 00:06:28.759-00:06:28.764 ...]
00:06:28.764 [2024-07-24 19:42:20.271452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.271983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272579] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.764 [2024-07-24 19:42:20.272714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.272993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.273817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 
[2024-07-24 19:42:20.274203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.274956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.275979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276394] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.276806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:28.765 [2024-07-24 19:42:20.277323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.277993] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.765 [2024-07-24 19:42:20.278806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.278853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.278897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.278944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.278994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 
[2024-07-24 19:42:20.279231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.279969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.280965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281873] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.281988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.282944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 
[2024-07-24 19:42:20.282990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.283989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.766 [2024-07-24 19:42:20.284934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.284985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285744] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.285987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.286992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 
[2024-07-24 19:42:20.287411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.287975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.288984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289583] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.767 [2024-07-24 19:42:20.289883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.289926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.289962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.290857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.290920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.290967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 
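The flood above is the NVMe-oF target's read-path length validation rejecting fuzzed commands: each read asks for NLB 1 logical block of 512 bytes, but the command's SGL describes only 1 byte of payload, so nvmf_bdev_ctrlr_read_cmd refuses to start the data transfer. The host-side completions are what the suppressed message summarizes: sct=0, sc=15 (0x0f), the NVMe generic status "Data SGL Length Invalid". A minimal sketch of that bound check in C, with hypothetical names rather than the actual ctrlr_bdev.c source: 

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the validation at ctrlr_bdev.c:309: the target
 * multiplies the requested block count by the namespace block size and
 * compares it against the payload length described by the command's SGL. */
static int
read_cmd_length_valid(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		/* Emits one instance of the repeated log line above. */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n", num_blocks, block_size, sgl_length);
		return 0; /* caller fails the read: sct=0, sc=0x0f (Data SGL Length Invalid) */
	}
	return 1;
}

int
main(void)
{
	/* The values from the log: one 512-byte block requested, 1-byte SGL. */
	return read_cmd_length_valid(1, 512, 1) ? 0 : 1;
}

Built with a plain C compiler (e.g. cc sgl_check.c), this prints the same error line once for the logged values and exits nonzero, mirroring the failed completion the fuzzer provokes thousands of times here. 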
[2024-07-24 19:42:20.291623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.291998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.292976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.293996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294036] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.294982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 
[2024-07-24 19:42:20.295224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.295963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.768 [2024-07-24 19:42:20.296337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.296610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297900] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.297995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.298953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 
[2024-07-24 19:42:20.298986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.299851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.300986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301542] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.301995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.769 [2024-07-24 19:42:20.302490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 
[2024-07-24 19:42:20.302694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.302985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.303956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.304988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305544] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.305993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:28.770 [2024-07-24 19:42:20.306035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.306993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 
[2024-07-24 19:42:20.307072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.307985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.308965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309057] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.036 [2024-07-24 19:42:20.309224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.309265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.309306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.309875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.309919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.309961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 [2024-07-24 19:42:20.310699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.037 
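For context on the flood above: the message comes from the NVMe-oF target's length validation for Read commands. nvmf_bdev_ctrlr_read_cmd() in ctrlr_bdev.c refuses a Read whose total transfer (NLB * block size) exceeds the buffer described by the command's SGL, and the unit test drives exactly the failing case logged here (NLB 1, 512-byte blocks, 1-byte SGL). Below is a minimal, self-contained sketch of that guard; the function name validate_read_length and its return convention are simplified stand-ins for SPDK's internals, not the actual API.

/* Illustrative sketch only -- a simplified stand-in for the length check in
 * SPDK's nvmf_bdev_ctrlr_read_cmd() (ctrlr_bdev.c). Names and the return
 * convention are hypothetical, not SPDK's real interfaces. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int
validate_read_length(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	/* Total transfer requested by the Read command. */
	uint64_t xfer_len = nlb * (uint64_t)block_size;

	if (xfer_len > sgl_length) {
		/* This is the line flooding the log: NLB 1 * 512 > SGL length 1. */
		fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
		return -1; /* the real code completes the request with an SGL-length error status */
	}
	return 0;
}

int
main(void)
{
	validate_read_length(1, 512, 512);	/* transfer fits the SGL: no output */
	validate_read_length(1, 512, 1);	/* the exact failing case the test repeats */
	return 0;
}

Because the check fails before any bdev I/O is issued, the request is completed immediately with an error status, which is how the unit test can hit this path hundreds of times within a few milliseconds of console time.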
00:06:29.037 [2024-07-24 19:42:20.310740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:29.037 [2024-07-24 19:42:20.310786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:29.037 true
[... the same ctrlr_bdev.c:309 error line continues to repeat, 19:42:20.310833 through 19:42:20.315316 (console time 00:06:29.037-00:06:29.038); duplicates omitted ...]
00:06:29.038 [2024-07-24 19:42:20.315363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.315613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316940] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.316982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.317973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 
[2024-07-24 19:42:20.318122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.038 [2024-07-24 19:42:20.318914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.318968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.319990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320797] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.320970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 
[2024-07-24 19:42:20.321802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.321928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.322984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.039 [2024-07-24 19:42:20.323487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.323999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324093] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.324887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 
[2024-07-24 19:42:20.325698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.325963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.326958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.327004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.327056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.327102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.327146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.040 [2024-07-24 19:42:20.327192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.327962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328048] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:29.041 [2024-07-24 19:42:20.328700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.328984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329459] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.329987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 
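The flood above is a single validation failing over and over during the ns_hotplug_stress run: nvmf_bdev_ctrlr_read_cmd rejects a read whose requested transfer, NLB (number of logical blocks) times the block size, exceeds the length described by the command's SGL, so each resubmitted read completes with an error. A minimal standalone C sketch of that kind of length check follows; the function and variable names are illustrative assumptions, not SPDK's actual implementation:

/*
 * Sketch (assumed names, not SPDK's code) of the check behind the
 * repeated error above: reject a read when the data it would transfer
 * (NLB * block size) is larger than the buffer the SGL describes.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int check_read_length(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
    if (nlb * block_size > sgl_length) {
        /* Same shape as the log line; the command then fails with an error status. */
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
                nlb, block_size, sgl_length);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* The case from the log: 1 block of 512 bytes against a 1-byte SGL. */
    int rc = check_read_length(1, 512, 1);
    return rc == -1 ? 0 : 1; /* the rejection is the expected outcome here */
}

With these inputs the sketch prints the same message the target logs before completing the read with an error (the "Read completed with error (sct=0, sc=15)" completions counted by the suppression notice above).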
[2024-07-24 19:42:20.330288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.330944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.331991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.332047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.332093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:29.041 [2024-07-24 19:42:20.332140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.332983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333236] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.333984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 [2024-07-24 19:42:20.334306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.042 
00:06:29.042 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562
00:06:29.042 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.047 [2024-07-24 19:42:20.357817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.984 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.244 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:30.244 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:30.244 true 00:06:30.244 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:30.244 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.185 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.445 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:31.445 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:31.445 true 00:06:31.445 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:31.445 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.704 19:42:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.979 19:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:31.979 19:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:31.979 true 00:06:31.979 19:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:31.979 19:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.362 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.362 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:33.362 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:33.638 true 00:06:33.638 19:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:33.638 19:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.490 19:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.490 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:34.490 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:34.750 true 00:06:34.750 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:34.750 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.010 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.010 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:35.010 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:35.270 true 00:06:35.270 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:35.270 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.651 19:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.651 19:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:36.651 19:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:36.911 true 00:06:36.911 19:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:36.911 19:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.847 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.847 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:37.847 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:38.106 true 00:06:38.106 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:38.106 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.106 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.365 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:38.365 19:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:38.623 true 00:06:38.623 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:38.623 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 19:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.999 19:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:39.999 19:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:39.999 true 00:06:40.257 19:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:40.257 19:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.824 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.082 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:41.082 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:41.341 true 00:06:41.341 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:41.341 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.599 19:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.599 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:41.599 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:41.859 true 00:06:41.859 19:42:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:41.859 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.236 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.236 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:43.236 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:43.494 true 00:06:43.494 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:43.494 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.429 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.429 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:44.429 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:44.688 true 00:06:44.688 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:44.688 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.688 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.946 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:44.946 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:45.204 true 00:06:45.204 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:45.204 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
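The flood of "Read NLB 1 * block size 512 > SGL length 1" errors collapsed above, and the recurring "Message suppressed 999 times" lines, are the expected fallout of removing and resizing the namespace while the I/O generator keeps issuing reads: as the message itself states, the 1-block, 512-byte read payload exceeds the length its SGL describes, so the target rejects it. The xtrace tags in this stretch (@44-@50 of ns_hotplug_stress.sh) repeat one cycle of the single-namespace phase. A minimal bash sketch of that loop as it can be reconstructed from the trace; the variable names, the starting null_size, and the increment form are assumptions, not the script's literal text:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf_pid=1893562    # PID of the I/O generator in this run; how it is launched is outside this excerpt
null_size=1007      # assumed starting point; the trace in this stretch is already at 1008+
while kill -0 "$perf_pid"; do                                     # @44: loop while the generator lives
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                  # @49: 1008, 1009, ... in the trace
    $rpc bdev_null_resize NULL1 "$null_size"                      # @50: resize NULL1 under live I/O
done

Once kill -0 fails (the "No such process" line further on), the loop exits and the script moves on to cleanup and the multi-worker phase.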
00:06:46.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.398 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:46.398 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:46.656 true 00:06:46.656 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:46.656 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.590 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.590 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:47.590 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:47.848 true 00:06:47.848 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:47.848 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.106 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.106 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:48.106 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:48.365 true 00:06:48.365 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:48.365 19:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.741 19:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.741 19:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:49.741 19:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:49.741 true 00:06:50.000 19:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:50.000 19:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.935 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.935 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:50.935 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:50.935 true 00:06:51.193 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:51.193 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.193 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.452 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:51.452 19:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:51.710 true 00:06:51.710 19:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562 00:06:51.710 19:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.700 19:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.958 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11)
00:06:52.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:52.958 19:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:52.958 19:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:53.216 true
00:06:53.216 19:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562
00:06:53.216 19:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.151 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:54.151 Initializing NVMe Controllers
00:06:54.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:54.151 Controller IO queue size 128, less than required.
00:06:54.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:54.151 Controller IO queue size 128, less than required.
00:06:54.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:54.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:54.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:54.151 Initialization complete. Launching workers.
00:06:54.151 ========================================================
00:06:54.151 Latency(us)
00:06:54.151 Device Information : IOPS MiB/s Average min max
00:06:54.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2596.60 1.27 35212.49 2150.58 1114398.07
00:06:54.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17630.17 8.61 7241.92 2245.11 306300.90
00:06:54.151 ========================================================
00:06:54.151 Total : 20226.77 9.88 10832.63 2150.58 1114398.07
00:06:54.151
00:06:54.151 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
true
00:06:54.409 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1893562
00:06:54.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1893562) - No such process
00:06:54.409 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1893562
00:06:54.409 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.409 19:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:54.668 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:54.668 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:54.668 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:54.668 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:54.668 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:54.926 null0
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:54.926 null1
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:54.926 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:55.184 null2
00:06:55.184 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:55.184
19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.184 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:55.442 null3 00:06:55.442 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.442 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.442 19:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:55.701 null4 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:55.701 null5 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.701 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:55.959 null6 00:06:55.959 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:55.959 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:55.959 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:56.219 null7 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
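Each add_remove worker traced here (tags @14-@18) binds one namespace ID to one null bdev and then attaches and detaches it ten times. A sketch of the helper reconstructed from those tags; the body is inferred from the xtrace rather than copied from the script, so treat it as an approximation:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
add_remove() {
    local nsid=$1 bdev=$2                                                         # @14: e.g. nsid=1, bdev=null0
    for ((i = 0; i < 10; i++)); do                                                # @16: ten attach/detach rounds
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach bdev as NSID
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach it again
    done
}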
00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
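The @58-@66 tags interleaved through this stretch form the driver for the parallel phase: create eight null bdevs, start one backgrounded add_remove worker per bdev, collect the PIDs, and wait for all of them (the wait 1899169 1899171 ... line just below is that expansion). A sketch under the same assumptions as above, with the nsid-to-bdev pairing read off the add_remove arguments in the trace:

nthreads=8                                   # @58
pids=()                                      # @58
for ((i = 0; i < nthreads; i++)); do         # @59
    $rpc bdev_null_create "null$i" 100 4096  # @60: size and block-size arguments as captured in the trace
done
for ((i = 0; i < nthreads; i++)); do         # @62
    add_remove $((i + 1)) "null$i" &         # @63: add_remove 1 null0 ... add_remove 8 null7
    pids+=($!)                               # @64: remember each worker's PID
done
wait "${pids[@]}"                            # @66: block until all eight workers finish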
00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1899169 1899171 1899172 1899174 1899176 1899178 1899181 1899182 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.219 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.478 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.736 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.737 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.737 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
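From this point on the eight workers' traces interleave freely: any @17 add or @18 remove can race with another worker's call against the same subsystem, which is precisely the concurrent namespace hotplug this test is meant to exercise, and the @16 (( ++i )) / (( i < 10 )) pairs mark iteration boundaries inside individual workers.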
00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.995 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.996 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.996 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.254 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.513 19:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.772 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.031 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.032 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.290 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.549 19:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.549 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.808 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.067 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.068 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.327 19:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.586 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.845 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.846 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.105 rmmod nvme_tcp 00:07:00.105 rmmod nvme_fabrics 00:07:00.105 rmmod nvme_keyring 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1893181 ']' 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1893181 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1893181 ']' 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1893181 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1893181 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1893181' 00:07:00.105 killing process with pid 1893181 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1893181 00:07:00.105 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1893181 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.364 19:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.268 00:07:02.268 real 0m47.293s 00:07:02.268 user 3m10.856s 00:07:02.268 sys 0m15.104s 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.268 ************************************ 00:07:02.268 END TEST nvmf_ns_hotplug_stress 00:07:02.268 ************************************ 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.268 19:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.527 ************************************ 00:07:02.527 START TEST nvmf_delete_subsystem 00:07:02.527 ************************************ 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:02.527 * Looking for test storage... 
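Even condensed, the trace shows the shape of the stressor that just finished: line 16 of ns_hotplug_stress.sh is a ten-pass loop header, line 17 attaches a namespace over RPC, line 18 detaches it, and the shuffled ordering of the entries suggests several copies running concurrently. Below is a minimal bash reconstruction of that pattern -- an assumed structure inferred from the xtrace, not the verbatim SPDK script:

```bash
#!/usr/bin/env bash
# Plausible reconstruction of the traced stressor. @16/@17/@18 in the log
# map onto the for-header, add, and remove below; one background worker
# per namespace is an inference from the interleaved ordering.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
	local nsid=$1 bdev=$2 i
	for ((i = 0; i < 10; i++)); do                                    # @16
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"  # @17
		"$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"          # @18
	done
}

for n in {1..8}; do
	add_remove "$n" "null$((n - 1))" & # nsid 1..8 backed by null0..null7
done
wait
```

Running the add/remove cycles in parallel rather than serially is what makes this a hot-plug *stress* test: the target must handle concurrent namespace attach/detach RPCs against the same subsystem without deadlocking.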
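The teardown traced above (nvmftestfini) unloads the kernel NVMe modules and then kills the target through the killprocess helper at common/autotest_common.sh@950-@974. A sketch of that helper, reconstructed from the traced branches rather than copied from the source:

```bash
# Hedged reconstruction of killprocess as the trace exercises it;
# branch line numbers refer to the @NNN markers in the log above.
killprocess() {
	local pid=$1
	[[ -n $pid ]] || return 1                 # @950: require a pid argument
	kill -0 "$pid" 2> /dev/null || return 0   # @954: nothing to do if it already exited
	local process_name=""
	if [[ $(uname) == Linux ]]; then          # @955
		process_name=$(ps --no-headers -o comm= "$pid") # @956: e.g. reactor_1
	fi
	if [[ $process_name == sudo ]]; then      # @960: a sudo wrapper needs different handling
		return 1
	fi
	echo "killing process with pid $pid"      # @968
	kill "$pid"                               # @969: SIGTERM, let the app shut down cleanly
	wait "$pid" || true                       # @974: reap the child, tolerate nonzero exit
}
```

The `kill` followed by `wait` is the important part: reaping the reactor before the next test starts prevents a half-dead nvmf_tgt from holding hugepages or the TCP listen port.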
00:07:02.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.527 19:42:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:02.527 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.528 19:42:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
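The setup traced above (nvmf/common.sh@9-@31) pins down the ports, addressing, and host identity that `nvme connect` will use, then assembles the target application's argument list. A condensed sketch of those steps; the values are the ones visible in the log, while the derivation of NVME_HOSTID from the generated NQN is an assumption -- the log only prints both values:

```bash
# Minimal sketch of the traced environment setup (assumed helper-free form).
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)    # @17: e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:} # assumed: hostid is the uuid suffix of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP_SHM_ID=0                   # placeholder; the harness supplies the real shm id
NVMF_APP=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) # @29: shm id plus full trace mask
```

With `-e 0xFFFF` every SPDK trace group is enabled, which is why the target side of these autotest logs is so verbose.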
00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:07.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:07.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:07.799 Found net devices under 0000:86:00.0: cvl_0_0 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.799 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:07.800 Found net devices under 0000:86:00.1: cvl_0_1 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.800 19:42:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:07.800 00:07:07.800 --- 10.0.0.2 ping statistics --- 00:07:07.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.800 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:07:07.800 00:07:07.800 --- 10.0.0.1 ping statistics --- 00:07:07.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.800 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1903537 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1903537 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1903537 ']' 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.800 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 [2024-07-24 19:42:59.376297] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
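The nvmf_tcp_init sequence above splits the two ports into a point-to-point test rig: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened in iptables, and a ping in each direction proves the link before any NVMe traffic flows. Condensed to its essentials (names and addresses taken from the log; error handling omitted):

# Sketch: the two-namespace NVMe/TCP topology built by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                              # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns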
00:07:07.800 [2024-07-24 19:42:59.376342] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.059 [2024-07-24 19:42:59.430511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.059 [2024-07-24 19:42:59.509164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.059 [2024-07-24 19:42:59.509202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.059 [2024-07-24 19:42:59.509209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.059 [2024-07-24 19:42:59.509215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.059 [2024-07-24 19:42:59.509221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.059 [2024-07-24 19:42:59.509275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.059 [2024-07-24 19:42:59.509277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.059 [2024-07-24 19:42:59.647461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.059 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.347 [2024-07-24 19:42:59.671656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.347 NULL1 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.347 Delay0 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.347 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.348 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.348 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1903572 00:07:08.348 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:08.348 19:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:08.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.348 [2024-07-24 19:42:59.749286] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
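With networking up and nvmf_tgt listening on its RPC socket, delete_subsystem.sh@15..@28 configure the target entirely over RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev so that the upcoming deletion is guaranteed to race against slow in-flight I/O. A sketch of the equivalent calls through SPDK's rpc.py (the rpc.py path is an assumption; the script's rpc_cmd wrapper issues the same calls via /var/tmp/spdk.sock):

# Sketch: the setup behind delete_subsystem.sh@15..@28 above.
RPC="./scripts/rpc.py"                          # assumed location inside the SPDK tree
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks
# The delay bdev adds 1000000 us (~1 s) of latency to every read and write.
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Start the initiator-side load, then pull the subsystem out from under it.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1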
00:07:10.258 19:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.258 19:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.258 19:43:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:10.258 [long run of interleaved 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the queued perf I/O]
00:07:10.258 [2024-07-24 19:43:01.807809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aa710 is same with the state(5) to be set
00:07:10.259 [2024-07-24 19:43:01.808221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb57800d660 is same with the state(5) to be set
00:07:11.200 [2024-07-24 19:43:02.766638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23abac0 is same with the state(5) to be set
00:07:11.461 [2024-07-24 19:43:02.809898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aa3e0 is same with the state(5) to be set
00:07:11.461 [2024-07-24 19:43:02.810041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aa000 is same with the state(5) to be set
00:07:11.461 [2024-07-24 19:43:02.810176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aaa40 is same with the state(5) to be set
00:07:11.461 Read completed with error (sct=0, sc=8) 00:07:11.461 [2024-07-24 19:43:02.810729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb57800d330 is same with the state(5) to be set 00:07:11.461 Initializing NVMe Controllers 00:07:11.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.461 Controller IO queue size 128, less than required. 00:07:11.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:11.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:11.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:11.461 Initialization complete. Launching workers. 00:07:11.461 ======================================================== 00:07:11.461 Latency(us) 00:07:11.461 Device Information : IOPS MiB/s Average min max 00:07:11.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.21 0.09 955560.30 1078.87 1011544.46 00:07:11.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.40 0.08 880509.34 225.17 1012617.41 00:07:11.461 ======================================================== 00:07:11.461 Total : 335.62 0.16 921032.42 225.17 1012617.41 00:07:11.461 00:07:11.461 [2024-07-24 19:43:02.811103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23abac0 (9): Bad file descriptor 00:07:11.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:11.461 19:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.461 19:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:11.461 19:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1903572 00:07:11.461 19:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1903572 00:07:11.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1903572) - No such process 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1903572 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1903572 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.722 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1903572 00:07:11.981 19:43:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.981 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.982 [2024-07-24 19:43:03.336163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1904254 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:11.982 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.982 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.982 [2024-07-24 19:43:03.400683] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
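Both runs finish the same way: the script polls with kill -0 until the perf process disappears, bounded by a delay counter, and the first run additionally asserts through the NOT helper that wait reports a non-zero exit, since perf must have failed after its subsystem was deleted mid-I/O. The pattern reduced to plain bash (a sketch; the real loop is in delete_subsystem.sh, the NOT helper in autotest_common.sh):

# Sketch: poll for perf exit, then assert it failed (delete_subsystem.sh@34..@45).
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do       # signal 0 only probes for existence
    sleep 0.5
    (( delay++ > 30 )) && { echo "perf outlived the deleted subsystem" >&2; exit 1; }
done
# NOT inverts its command's status: the test passes only if wait fails, i.e.
# spdk_nvme_perf exited non-zero after losing its controller mid-I/O.
if wait "$perf_pid"; then
    echo "expected spdk_nvme_perf to report errors" >&2
    exit 1
fi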
00:07:12.551 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:12.551 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:12.551 19:43:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.811 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:12.811 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:12.811 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:13.379 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.379 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:13.379 19:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:13.949 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.949 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:13.949 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:14.519 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.519 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:14.519 19:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:14.779 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:14.779 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:14.779 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:15.349 Initializing NVMe Controllers 00:07:15.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:15.349 Controller IO queue size 128, less than required. 00:07:15.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:15.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:15.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:15.349 Initialization complete. Launching workers. 
00:07:15.349 ======================================================== 00:07:15.349 Latency(us) 00:07:15.349 Device Information : IOPS MiB/s Average min max 00:07:15.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004071.88 1000387.74 1011619.78 00:07:15.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005195.43 1000671.18 1012294.47 00:07:15.350 ======================================================== 00:07:15.350 Total : 256.00 0.12 1004633.66 1000387.74 1012294.47 00:07:15.350 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1904254 00:07:15.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1904254) - No such process 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1904254 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.350 rmmod nvme_tcp 00:07:15.350 rmmod nvme_fabrics 00:07:15.350 rmmod nvme_keyring 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1903537 ']' 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1903537 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1903537 ']' 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1903537 00:07:15.350 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1903537 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1903537' 00:07:15.610 killing process with pid 1903537 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1903537 00:07:15.610 19:43:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1903537 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.610 19:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.150 00:07:18.150 real 0m15.344s 00:07:18.150 user 0m29.038s 00:07:18.150 sys 0m4.707s 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.150 ************************************ 00:07:18.150 END TEST nvmf_delete_subsystem 00:07:18.150 ************************************ 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.150 ************************************ 00:07:18.150 START TEST nvmf_host_management 00:07:18.150 ************************************ 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:18.150 * Looking for test storage... 
00:07:18.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.150 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.151 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.151 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.151 19:43:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.432 
19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.432 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:23.432 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.432 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:07:23.433 00:07:23.433 --- 10.0.0.2 ping statistics --- 00:07:23.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.433 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:07:23.433 00:07:23.433 --- 10.0.0.1 ping statistics --- 00:07:23.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.433 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1908252 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1908252 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1908252 ']' 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
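Condensed from the nvmf_tcp_init trace above: the two e810 ports found earlier (cvl_0_0, cvl_0_1) are split into target and initiator roles by moving cvl_0_0 into a fresh network namespace, addressing both ends of the 10.0.0.0/24 link, opening TCP port 4420, and ping-testing both directions. The same commands, replayed as a standalone sketch (interface and namespace names as discovered on this node):

    # Put the target port in its own netns so one host can run both sides.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # host -> namespace (0.179 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host (0.392 ms above)
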
00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.433 19:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:23.433 [2024-07-24 19:43:14.810170] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:07:23.433 [2024-07-24 19:43:14.810212] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.433 [2024-07-24 19:43:14.867133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.433 [2024-07-24 19:43:14.948158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.433 [2024-07-24 19:43:14.948194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.433 [2024-07-24 19:43:14.948201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.433 [2024-07-24 19:43:14.948207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.433 [2024-07-24 19:43:14.948212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.433 [2024-07-24 19:43:14.948264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.433 [2024-07-24 19:43:14.948347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.433 [2024-07-24 19:43:14.948456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.433 [2024-07-24 19:43:14.948457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 [2024-07-24 19:43:15.667340] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.375 19:43:15 
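The target is started inside that namespace (shm id 0, all tracepoint groups, core mask 0x1E, hence the four reactors above) and the harness blocks in waitforlisten until the RPC socket answers, after which the TCP transport is created. A minimal launch-and-wait sketch, assuming a simple poll on framework_wait_init (the real waitforlisten in autotest_common.sh does more bookkeeping):

    # Launch nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Stand-in for waitforlisten: retry until the RPC server accepts commands.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        sleep 0.5
    done
    # Transport creation as traced at host_management.sh@18 above.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
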
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 Malloc0 00:07:24.375 [2024-07-24 19:43:15.727159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1908518 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1908518 /var/tmp/bdevperf.sock 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1908518 ']' 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
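The create_subsystem step batches its RPCs through rpcs.txt (the cat at @23 above) rather than echoing each call, so the file's contents never appear in the log. From the Malloc sizes set earlier (64 MiB, 512-byte blocks), the listener notice above (10.0.0.2 port 4420), and the cnode0/host0 names used later, the batch is presumably equivalent to these rpc.py calls (a reconstruction, not a dump of the actual file):

    # Likely target-side setup behind rpcs.txt, reconstructed from this log:
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # backing bdev
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0    # so nvmf_subsystem_remove_host can later revoke it
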
00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:24.375 { 00:07:24.375 "params": { 00:07:24.375 "name": "Nvme$subsystem", 00:07:24.375 "trtype": "$TEST_TRANSPORT", 00:07:24.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.375 "adrfam": "ipv4", 00:07:24.375 "trsvcid": "$NVMF_PORT", 00:07:24.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.375 "hdgst": ${hdgst:-false}, 00:07:24.375 "ddgst": ${ddgst:-false} 00:07:24.375 }, 00:07:24.375 "method": "bdev_nvme_attach_controller" 00:07:24.375 } 00:07:24.375 EOF 00:07:24.375 )") 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:24.375 19:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:24.375 "params": { 00:07:24.375 "name": "Nvme0", 00:07:24.375 "trtype": "tcp", 00:07:24.375 "traddr": "10.0.0.2", 00:07:24.375 "adrfam": "ipv4", 00:07:24.375 "trsvcid": "4420", 00:07:24.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.375 "hdgst": false, 00:07:24.375 "ddgst": false 00:07:24.375 }, 00:07:24.375 "method": "bdev_nvme_attach_controller" 00:07:24.375 }' 00:07:24.375 [2024-07-24 19:43:15.818740] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:07:24.375 [2024-07-24 19:43:15.818783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908518 ] 00:07:24.375 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.375 [2024-07-24 19:43:15.874945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.375 [2024-07-24 19:43:15.947659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.635 Running I/O for 10 seconds... 
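bdevperf consumes the generated attach-controller config through /dev/fd/63, i.e. bash process substitution, and the waitforio helper traced next polls bdev_get_iostat until the read counter on Nvme0n1 shows traffic (578 ops on the first poll below, against a threshold of 100). A condensed sketch of both steps, using rpc.py directly in place of the harness's rpc_cmd wrapper:

    # Run bdevperf against the target: qd 64, 64 KiB verify I/O for 10 s,
    # JSON config handed over on an anonymous fd via process substitution.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    # waitforio, reduced: succeed once the bdev has served at least 100 reads.
    for ((i = 10; i != 0; i--)); do
        reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
        ((reads >= 100)) && break
        sleep 1
    done
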
00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=578 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 578 -ge 100 ']' 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.207 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.207 [2024-07-24 
19:43:16.722475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dea5c0 is same with the state(5) to be set 00:07:25.207 [... the identical nvmf_tcp_qpair_set_recv_state error for tqpair=0x1dea5c0 repeats ~60 more times, 19:43:16.722521 through 19:43:16.722882; duplicates elided ...] 00:07:25.208 [2024-07-24 19:43:16.723570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:25.208 [2024-07-24 19:43:16.723769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.208 [2024-07-24 19:43:16.723821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.208 [2024-07-24 19:43:16.723827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 [2024-07-24 19:43:16.723911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.209 [2024-07-24 19:43:16.723918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.209 
[2024-07-24 19:43:16.723926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:25.209 [2024-07-24 19:43:16.723933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid 22-63, lba 76544-81792 (len:128 each), as every outstanding read on qid 1 is aborted during the controller reset ...]
00:07:25.210 [2024-07-24 19:43:16.724589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5660 is same with the state(5) to be set
00:07:25.210 [2024-07-24 19:43:16.724639] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xad5660 was disconnected and freed. reset controller.
00:07:25.210 [2024-07-24 19:43:16.724678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:25.210 [2024-07-24 19:43:16.724687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED pair repeats for admin-queue cid 1-3 ...]
00:07:25.210 [2024-07-24 19:43:16.724736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3980 is same with the state(5) to be set
00:07:25.210 [2024-07-24 19:43:16.725671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:25.210 task offset: 73728 on job bdev=Nvme0n1 fails
00:07:25.210
00:07:25.210 Latency(us)
00:07:25.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:25.210 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:25.210 Job: Nvme0n1 ended in about 0.62 seconds with error
00:07:25.210 Verification LBA range: start 0x0 length 0x400
00:07:25.210 Nvme0n1 : 0.62 925.79 57.86 102.87 0.00 61189.16 11283.59 57671.68
00:07:25.210 ===================================================================================================================
00:07:25.210 Total : 925.79 57.86 102.87 0.00 61189.16 11283.59 57671.68
00:07:25.210 [2024-07-24 19:43:16.727287] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:25.210 [2024-07-24 19:43:16.727301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a3980 (9): Bad file descriptor
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:25.210 [2024-07-24 19:43:16.730127] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:07:25.210 [2024-07-24 19:43:16.730276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:07:25.210 [2024-07-24 19:43:16.730302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:25.210 [2024-07-24 19:43:16.730315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:07:25.210 [2024-07-24 19:43:16.730322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:07:25.210 [2024-07-24 19:43:16.730330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:07:25.210 [2024-07-24 19:43:16.730337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6a3980
00:07:25.210 [2024-07-24 19:43:16.730360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a3980 (9): Bad file descriptor
00:07:25.210 [2024-07-24 19:43:16.730370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:07:25.210 [2024-07-24 19:43:16.730377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:07:25.210 [2024-07-24 19:43:16.730385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:07:25.210 [2024-07-24 19:43:16.730397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
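The failed CONNECT above is the first half of this test case: bdevperf dialed nqn.2016-06.io.spdk:cnode0 before its host NQN was allow-listed, so the target refused it with "does not allow host". The rpc_cmd traced above performs the fix; roughly the same step by hand, via SPDK's rpc.py (a sketch using stock rpc.py subcommands, not a replay of the harness):

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  scripts/rpc.py nvmf_get_subsystems    # host0 should now appear under "hosts" for cnode0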
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.210 19:43:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1908518
00:07:26.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1908518) - No such process
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:07:26.154 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:07:26.154 {
00:07:26.154   "params": {
00:07:26.154     "name": "Nvme$subsystem",
00:07:26.154     "trtype": "$TEST_TRANSPORT",
00:07:26.154     "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:26.154     "adrfam": "ipv4",
00:07:26.154     "trsvcid": "$NVMF_PORT",
00:07:26.154     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:26.154     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:26.154     "hdgst": ${hdgst:-false},
00:07:26.154     "ddgst": ${ddgst:-false}
00:07:26.154   },
00:07:26.154   "method": "bdev_nvme_attach_controller"
00:07:26.154 }
00:07:26.154 EOF
00:07:26.154 )")
00:07:26.413 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:07:26.413 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:07:26.413 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:07:26.413 19:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:26.413   "params": {
00:07:26.413     "name": "Nvme0",
00:07:26.413     "trtype": "tcp",
00:07:26.413     "traddr": "10.0.0.2",
00:07:26.413     "adrfam": "ipv4",
00:07:26.413     "trsvcid": "4420",
00:07:26.413     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:26.413     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:26.413     "hdgst": false,
00:07:26.413     "ddgst": false
00:07:26.413   },
00:07:26.413   "method": "bdev_nvme_attach_controller"
00:07:26.413 }'
00:07:26.413 [2024-07-24 19:43:17.791366] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
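The JSON fragment printed by gen_nvmf_target_json above is just a bdev_nvme_attach_controller RPC serialized into bdevperf's --json config format. Against a live app the equivalent call would look roughly like this (a sketch; flag spellings as in SPDK's rpc.py bdev_nvme_attach_controller, run from the spdk checkout):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0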
00:07:26.413 [2024-07-24 19:43:17.791415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908771 ] 00:07:26.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.413 [2024-07-24 19:43:17.846795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.413 [2024-07-24 19:43:17.919036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.672 Running I/O for 1 seconds... 00:07:27.614 00:07:27.614 Latency(us) 00:07:27.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.614 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:27.614 Verification LBA range: start 0x0 length 0x400 00:07:27.614 Nvme0n1 : 1.04 981.33 61.33 0.00 0.00 64440.24 12366.36 62002.75 00:07:27.614 =================================================================================================================== 00:07:27.614 Total : 981.33 61.33 0.00 0.00 64440.24 12366.36 62002.75 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.875 rmmod nvme_tcp 00:07:27.875 rmmod nvme_fabrics 00:07:27.875 rmmod nvme_keyring 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1908252 ']' 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1908252 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1908252 ']' 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1908252 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1908252 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1908252' 00:07:27.875 killing process with pid 1908252 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1908252 00:07:27.875 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1908252 00:07:28.198 [2024-07-24 19:43:19.607765] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:28.198 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.199 19:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.132 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.132 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:30.132 00:07:30.132 real 0m12.385s 00:07:30.132 user 0m22.699s 00:07:30.132 sys 0m5.070s 00:07:30.132 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.132 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 ************************************ 00:07:30.132 END TEST nvmf_host_management 00:07:30.132 ************************************ 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.394 ************************************ 00:07:30.394 START TEST nvmf_lvol 00:07:30.394 ************************************ 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.394 * Looking for test storage... 00:07:30.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same PATH with the go entry prepended; repeated segments elided ...]:/var/lib/snapd/snap/bin
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise, with protoc prepended ...]:/var/lib/snapd/snap/bin
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH, as above ...]:/var/lib/snapd/snap/bin
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.394 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.777 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.778 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.778 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.778 19:43:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.778 19:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:07:35.778 00:07:35.778 --- 10.0.0.2 ping statistics --- 00:07:35.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.778 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:35.778 00:07:35.778 --- 10.0.0.1 ping statistics --- 00:07:35.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.778 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1912538 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1912538 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1912538 ']' 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.778 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 [2024-07-24 19:43:27.325362] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
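The nvmfappstart/waitforlisten sequence traced above reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC server answers; a minimal by-hand sketch (flags and namespace name as traced; framework_wait_init is a stock SPDK RPC used here as a stand-in for the harness's own polling loop):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  scripts/rpc.py framework_wait_init    # returns once the app has finished startup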
00:07:35.778 [2024-07-24 19:43:27.325404] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.038 [2024-07-24 19:43:27.381588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.038 [2024-07-24 19:43:27.462596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.038 [2024-07-24 19:43:27.462632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.038 [2024-07-24 19:43:27.462639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.038 [2024-07-24 19:43:27.462644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.038 [2024-07-24 19:43:27.462649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.039 [2024-07-24 19:43:27.462704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.039 [2024-07-24 19:43:27.462798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.039 [2024-07-24 19:43:27.462799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.610 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.871 [2024-07-24 19:43:28.340435] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.871 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:37.131 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:37.131 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:37.391 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:37.391 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:37.391 19:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:37.652 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=02903b23-a512-49e8-a745-5507628616dc 
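Condensed from the xtrace above, the backing stack for this lvol test is two 64 MiB malloc bdevs striped into a RAID0, with a logical-volume store on top (commands exactly as the test invokes them; Malloc0/Malloc1 are the names the first two calls return):

  scripts/rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # 64 KiB strip size
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> lvstore UUID (02903b23-...)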
00:07:37.652 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 02903b23-a512-49e8-a745-5507628616dc lvol 20 00:07:37.912 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cab6ff19-12d9-4ef8-a05e-3b1335414aaa 00:07:37.912 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.912 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cab6ff19-12d9-4ef8-a05e-3b1335414aaa 00:07:38.172 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.433 [2024-07-24 19:43:29.820741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.433 19:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.433 19:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1913030 00:07:38.433 19:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:38.433 19:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:38.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.633 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cab6ff19-12d9-4ef8-a05e-3b1335414aaa MY_SNAPSHOT 00:07:39.894 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a8f2de94-5aa9-4cae-b869-5da3a92c3e22 00:07:39.894 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cab6ff19-12d9-4ef8-a05e-3b1335414aaa 30 00:07:39.894 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a8f2de94-5aa9-4cae-b869-5da3a92c3e22 MY_CLONE 00:07:40.154 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=db8b1dac-b262-4ab5-8232-3995305a594d 00:07:40.154 19:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate db8b1dac-b262-4ab5-8232-3995305a594d 00:07:40.725 19:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1913030 00:07:48.860 Initializing NVMe Controllers 00:07:48.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:48.860 Controller IO queue size 128, less than required. 00:07:48.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
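The lvol lifecycle exercised in this block, condensed into one sequence (a sketch: $lvs, $lvol, $snap and $clone are placeholder shell variables for the UUIDs the preceding RPCs returned — 02903b23-..., cab6ff19-..., a8f2de94-... and db8b1dac-...; sizes are in MiB):

  scripts/rpc.py bdev_lvol_create -u $lvs lvol 20      # 20 MiB volume on the lvstore
  scripts/rpc.py bdev_lvol_snapshot $lvol MY_SNAPSHOT  # freeze its current contents
  scripts/rpc.py bdev_lvol_resize $lvol 30             # grow the live volume to 30 MiB
  scripts/rpc.py bdev_lvol_clone $snap MY_CLONE        # thin clone off the snapshot
  scripts/rpc.py bdev_lvol_inflate $clone              # decouple the clone from the snapshot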
00:07:48.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:48.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:48.860 Initialization complete. Launching workers. 00:07:48.860 ======================================================== 00:07:48.860 Latency(us) 00:07:48.860 Device Information : IOPS MiB/s Average min max 00:07:48.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11999.50 46.87 10670.51 1804.17 58195.57 00:07:48.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11832.30 46.22 10821.45 2995.75 64805.56 00:07:48.860 ======================================================== 00:07:48.860 Total : 23831.80 93.09 10745.45 1804.17 64805.56 00:07:48.860 00:07:48.860 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.120 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cab6ff19-12d9-4ef8-a05e-3b1335414aaa 00:07:49.380 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02903b23-a512-49e8-a745-5507628616dc 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:49.640 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.641 rmmod nvme_tcp 00:07:49.641 rmmod nvme_fabrics 00:07:49.641 rmmod nvme_keyring 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1912538 ']' 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1912538 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1912538 ']' 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1912538 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912538 00:07:49.641 19:43:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912538' 00:07:49.641 killing process with pid 1912538 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1912538 00:07:49.641 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1912538 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.901 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.443 00:07:52.443 real 0m21.649s 00:07:52.443 user 1m4.253s 00:07:52.443 sys 0m6.751s 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.443 ************************************ 00:07:52.443 END TEST nvmf_lvol 00:07:52.443 ************************************ 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.443 ************************************ 00:07:52.443 START TEST nvmf_lvs_grow 00:07:52.443 ************************************ 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.443 * Looking for test storage... 
00:07:52.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.443 19:43:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:52.443 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:52.444 19:43:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.444 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:57.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:57.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.729 
19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:57.729 Found net devices under 0000:86:00.0: cvl_0_0 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.729 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:57.730 Found net devices under 0000:86:00.1: cvl_0_1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.730 19:43:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:57.730 00:07:57.730 --- 10.0.0.2 ping statistics --- 00:07:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.730 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:57.730 00:07:57.730 --- 10.0.0.1 ping statistics --- 00:07:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.730 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1918294 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1918294 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1918294 ']' 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.730 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 [2024-07-24 19:43:48.903864] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
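For orientation, the target launch recorded here reduces to the pattern below. This is a condensed sketch of what the nvmfappstart/waitforlisten helpers do in this run, not the full logic from common.sh and autotest_common.sh; the netns name, core mask, and socket path are taken from the trace above:

    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # run the target inside the server-side namespace set up above
    ip netns exec cvl_0_0_ns_spdk $TGT -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the app answers on /var/tmp/spdk.sock before issuing RPCs
    until $RPC -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done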
00:07:57.730 [2024-07-24 19:43:48.903910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.730 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.730 [2024-07-24 19:43:48.962963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.730 [2024-07-24 19:43:49.039540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.730 [2024-07-24 19:43:49.039579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.730 [2024-07-24 19:43:49.039586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.730 [2024-07-24 19:43:49.039591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.730 [2024-07-24 19:43:49.039596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.730 [2024-07-24 19:43:49.039620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.302 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.302 [2024-07-24 19:43:49.886914] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.562 ************************************ 00:07:58.562 START TEST lvs_grow_clean 00:07:58.562 ************************************ 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.562 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.562 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:58.562 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:58.822 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3e239f75-10a5-4f71-bb8a-53051d9d181a 00:07:58.822 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:07:58.822 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e239f75-10a5-4f71-bb8a-53051d9d181a lvol 150 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.082 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:59.342 [2024-07-24 19:43:50.825431] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:59.342 [2024-07-24 19:43:50.825482] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:59.342 true 00:07:59.342 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:07:59.342 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:59.600 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:59.600 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:59.600 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 00:07:59.860 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.120 [2024-07-24 19:43:51.487443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1918835 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1918835 /var/tmp/bdevperf.sock 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1918835 ']' 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.120 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 [2024-07-24 19:43:51.706430] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
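The bdevperf app starting here is driven entirely over its private RPC socket: the test attaches the exported namespace as an NVMe bdev and then triggers the workload that was fixed on the command line (-o 4096 -q 128 -w randwrite -t 10). Reduced to its RPC steps, the flow looks roughly like this, a sketch using the socket, address, and NQN from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bdevperf.sock
    # connect bdevperf's NVMe driver to the target subsystem; yields bdev Nvme0n1
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
         -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # kick off the preconfigured randwrite run and collect its results
    $BPERF_PY -s $SOCK perform_tests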
00:08:00.120 [2024-07-24 19:43:51.706477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918835 ] 00:08:00.380 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.380 [2024-07-24 19:43:51.760594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.380 [2024-07-24 19:43:51.839637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.948 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.948 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:00.948 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:01.517 Nvme0n1 00:08:01.517 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:01.517 [ 00:08:01.517 { 00:08:01.517 "name": "Nvme0n1", 00:08:01.517 "aliases": [ 00:08:01.517 "d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8" 00:08:01.517 ], 00:08:01.517 "product_name": "NVMe disk", 00:08:01.517 "block_size": 4096, 00:08:01.517 "num_blocks": 38912, 00:08:01.517 "uuid": "d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8", 00:08:01.517 "assigned_rate_limits": { 00:08:01.517 "rw_ios_per_sec": 0, 00:08:01.517 "rw_mbytes_per_sec": 0, 00:08:01.517 "r_mbytes_per_sec": 0, 00:08:01.517 "w_mbytes_per_sec": 0 00:08:01.517 }, 00:08:01.517 "claimed": false, 00:08:01.517 "zoned": false, 00:08:01.517 "supported_io_types": { 00:08:01.517 "read": true, 00:08:01.517 "write": true, 00:08:01.517 "unmap": true, 00:08:01.517 "flush": true, 00:08:01.517 "reset": true, 00:08:01.517 "nvme_admin": true, 00:08:01.517 "nvme_io": true, 00:08:01.517 "nvme_io_md": false, 00:08:01.517 "write_zeroes": true, 00:08:01.517 "zcopy": false, 00:08:01.517 "get_zone_info": false, 00:08:01.517 "zone_management": false, 00:08:01.517 "zone_append": false, 00:08:01.517 "compare": true, 00:08:01.517 "compare_and_write": true, 00:08:01.517 "abort": true, 00:08:01.517 "seek_hole": false, 00:08:01.517 "seek_data": false, 00:08:01.517 "copy": true, 00:08:01.517 "nvme_iov_md": false 00:08:01.517 }, 00:08:01.517 "memory_domains": [ 00:08:01.517 { 00:08:01.517 "dma_device_id": "system", 00:08:01.517 "dma_device_type": 1 00:08:01.517 } 00:08:01.517 ], 00:08:01.517 "driver_specific": { 00:08:01.517 "nvme": [ 00:08:01.517 { 00:08:01.517 "trid": { 00:08:01.517 "trtype": "TCP", 00:08:01.517 "adrfam": "IPv4", 00:08:01.517 "traddr": "10.0.0.2", 00:08:01.517 "trsvcid": "4420", 00:08:01.517 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:01.517 }, 00:08:01.517 "ctrlr_data": { 00:08:01.517 "cntlid": 1, 00:08:01.517 "vendor_id": "0x8086", 00:08:01.517 "model_number": "SPDK bdev Controller", 00:08:01.517 "serial_number": "SPDK0", 00:08:01.517 "firmware_revision": "24.09", 00:08:01.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:01.517 "oacs": { 00:08:01.517 "security": 0, 00:08:01.517 "format": 0, 00:08:01.517 "firmware": 0, 00:08:01.517 "ns_manage": 0 00:08:01.517 }, 00:08:01.517 
"multi_ctrlr": true, 00:08:01.517 "ana_reporting": false 00:08:01.517 }, 00:08:01.517 "vs": { 00:08:01.517 "nvme_version": "1.3" 00:08:01.517 }, 00:08:01.517 "ns_data": { 00:08:01.517 "id": 1, 00:08:01.517 "can_share": true 00:08:01.517 } 00:08:01.517 } 00:08:01.517 ], 00:08:01.517 "mp_policy": "active_passive" 00:08:01.517 } 00:08:01.517 } 00:08:01.517 ] 00:08:01.517 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.517 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1919058 00:08:01.517 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:01.517 Running I/O for 10 seconds... 00:08:02.898 Latency(us) 00:08:02.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.899 Nvme0n1 : 1.00 22138.00 86.48 0.00 0.00 0.00 0.00 0.00 00:08:02.899 =================================================================================================================== 00:08:02.899 Total : 22138.00 86.48 0.00 0.00 0.00 0.00 0.00 00:08:02.899 00:08:03.467 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:03.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.727 Nvme0n1 : 2.00 22503.00 87.90 0.00 0.00 0.00 0.00 0.00 00:08:03.727 =================================================================================================================== 00:08:03.727 Total : 22503.00 87.90 0.00 0.00 0.00 0.00 0.00 00:08:03.727 00:08:03.727 true 00:08:03.727 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:03.727 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:03.988 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:03.988 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:03.988 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1919058 00:08:04.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.558 Nvme0n1 : 3.00 22742.33 88.84 0.00 0.00 0.00 0.00 0.00 00:08:04.558 =================================================================================================================== 00:08:04.558 Total : 22742.33 88.84 0.00 0.00 0.00 0.00 0.00 00:08:04.558 00:08:05.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.940 Nvme0n1 : 4.00 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:08:05.940 =================================================================================================================== 00:08:05.940 Total : 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:08:05.940 00:08:06.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:06.879 Nvme0n1 : 5.00 22679.00 88.59 0.00 0.00 0.00 0.00 0.00 00:08:06.879 =================================================================================================================== 00:08:06.879 Total : 22679.00 88.59 0.00 0.00 0.00 0.00 0.00 00:08:06.879 00:08:07.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.819 Nvme0n1 : 6.00 22728.67 88.78 0.00 0.00 0.00 0.00 0.00 00:08:07.819 =================================================================================================================== 00:08:07.820 Total : 22728.67 88.78 0.00 0.00 0.00 0.00 0.00 00:08:07.820 00:08:08.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.756 Nvme0n1 : 7.00 22786.29 89.01 0.00 0.00 0.00 0.00 0.00 00:08:08.756 =================================================================================================================== 00:08:08.756 Total : 22786.29 89.01 0.00 0.00 0.00 0.00 0.00 00:08:08.756 00:08:09.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.695 Nvme0n1 : 8.00 22774.50 88.96 0.00 0.00 0.00 0.00 0.00 00:08:09.695 =================================================================================================================== 00:08:09.695 Total : 22774.50 88.96 0.00 0.00 0.00 0.00 0.00 00:08:09.695 00:08:10.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.635 Nvme0n1 : 9.00 22773.89 88.96 0.00 0.00 0.00 0.00 0.00 00:08:10.635 =================================================================================================================== 00:08:10.635 Total : 22773.89 88.96 0.00 0.00 0.00 0.00 0.00 00:08:10.635 00:08:11.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.575 Nvme0n1 : 10.00 22848.70 89.25 0.00 0.00 0.00 0.00 0.00 00:08:11.575 =================================================================================================================== 00:08:11.575 Total : 22848.70 89.25 0.00 0.00 0.00 0.00 0.00 00:08:11.575 00:08:11.575 00:08:11.575 Latency(us) 00:08:11.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.575 Nvme0n1 : 10.00 22848.80 89.25 0.00 0.00 5598.43 2464.72 23934.89 00:08:11.575 =================================================================================================================== 00:08:11.575 Total : 22848.80 89.25 0.00 0.00 5598.43 2464.72 23934.89 00:08:11.575 0 00:08:11.575 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1918835 00:08:11.575 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1918835 ']' 00:08:11.576 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1918835 00:08:11.576 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:11.576 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.576 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918835 00:08:11.839 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:11.839 
19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:11.839 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918835' 00:08:11.839 killing process with pid 1918835 00:08:11.839 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1918835 00:08:11.839 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.839 00:08:11.839 Latency(us) 00:08:11.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.839 =================================================================================================================== 00:08:11.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.839 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1918835 00:08:11.839 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.127 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.406 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:12.406 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:12.406 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:12.406 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:12.406 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.666 [2024-07-24 19:44:04.057444] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:12.666 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:12.926 request: 00:08:12.926 { 00:08:12.926 "uuid": "3e239f75-10a5-4f71-bb8a-53051d9d181a", 00:08:12.926 "method": "bdev_lvol_get_lvstores", 00:08:12.926 "req_id": 1 00:08:12.926 } 00:08:12.926 Got JSON-RPC error response 00:08:12.926 response: 00:08:12.926 { 00:08:12.926 "code": -19, 00:08:12.926 "message": "No such device" 00:08:12.926 } 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.926 aio_bdev 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.926 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:13.185 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 -t 2000 00:08:13.445 [ 00:08:13.445 { 00:08:13.445 "name": "d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8", 00:08:13.445 "aliases": [ 00:08:13.445 "lvs/lvol" 00:08:13.445 ], 00:08:13.445 "product_name": "Logical Volume", 00:08:13.445 "block_size": 4096, 00:08:13.445 "num_blocks": 38912, 00:08:13.445 "uuid": "d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8", 00:08:13.445 "assigned_rate_limits": { 00:08:13.445 "rw_ios_per_sec": 0, 00:08:13.445 "rw_mbytes_per_sec": 0, 00:08:13.445 "r_mbytes_per_sec": 0, 00:08:13.445 "w_mbytes_per_sec": 0 00:08:13.445 }, 00:08:13.445 "claimed": false, 00:08:13.445 "zoned": false, 00:08:13.445 "supported_io_types": { 00:08:13.445 "read": true, 00:08:13.445 "write": true, 00:08:13.445 "unmap": true, 00:08:13.445 "flush": false, 00:08:13.446 "reset": true, 00:08:13.446 "nvme_admin": false, 00:08:13.446 "nvme_io": false, 00:08:13.446 "nvme_io_md": false, 00:08:13.446 "write_zeroes": true, 00:08:13.446 "zcopy": false, 00:08:13.446 "get_zone_info": false, 00:08:13.446 "zone_management": false, 00:08:13.446 "zone_append": false, 00:08:13.446 "compare": false, 00:08:13.446 "compare_and_write": false, 00:08:13.446 "abort": false, 00:08:13.446 "seek_hole": true, 00:08:13.446 "seek_data": true, 00:08:13.446 "copy": false, 00:08:13.446 "nvme_iov_md": false 00:08:13.446 }, 00:08:13.446 "driver_specific": { 00:08:13.446 "lvol": { 00:08:13.446 "lvol_store_uuid": "3e239f75-10a5-4f71-bb8a-53051d9d181a", 00:08:13.446 "base_bdev": "aio_bdev", 00:08:13.446 "thin_provision": false, 00:08:13.446 "num_allocated_clusters": 38, 00:08:13.446 "snapshot": false, 00:08:13.446 "clone": false, 00:08:13.446 "esnap_clone": false 00:08:13.446 } 00:08:13.446 } 00:08:13.446 } 00:08:13.446 ] 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:13.446 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:13.706 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:13.706 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d27a45b0-17c0-4093-8aa5-ee4e21cc2eb8 00:08:13.966 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e239f75-10a5-4f71-bb8a-53051d9d181a 00:08:13.966 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.227 00:08:14.227 real 0m15.796s 00:08:14.227 user 0m15.423s 00:08:14.227 sys 0m1.438s 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:14.227 ************************************ 00:08:14.227 END TEST lvs_grow_clean 00:08:14.227 ************************************ 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.227 ************************************ 00:08:14.227 START TEST lvs_grow_dirty 00:08:14.227 ************************************ 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.227 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.487 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.487 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.748 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=2b00988f-dc05-4fea-9569-39630d373e5f 00:08:14.748 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:14.748 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b00988f-dc05-4fea-9569-39630d373e5f lvol 150 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.008 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.268 [2024-07-24 19:44:06.686765] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.268 [2024-07-24 19:44:06.686821] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.268 true 00:08:15.268 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:15.268 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.528 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.528 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.528 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:15.789 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.789 [2024-07-24 19:44:07.384881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
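The numbers above follow from the 4 MiB cluster size: the 200 MiB backing file gives 50 clusters, of which 49 remain for data after metadata (total_data_clusters=49), and the 150 MiB lvol consumes 38 of them. Growing the file to 400 M and rescanning only resizes the AIO bdev (51200 -> 102400 blocks); the lvstore keeps reporting 49 clusters until bdev_lvol_grow_lvstore runs later in this test. A condensed, hedged replay of that setup, where RPC, AIO, LVS and LVOL are illustrative shell variables and the path is a stand-in, while the RPC calls themselves are exactly the ones this run issues:

RPC=./scripts/rpc.py                 # stand-in for the full workspace path above
AIO=/tmp/aio_bdev                    # stand-in for .../spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)    # 150 MiB = 38 clusters
truncate -s 400M "$AIO"              # grow the file under the running bdev
$RPC bdev_aio_rescan aio_bdev        # bdev now 102400 blocks; lvstore still 49 clusters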
00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1921504 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1921504 /var/tmp/bdevperf.sock 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1921504 ']' 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.049 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.049 [2024-07-24 19:44:07.612643] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
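For the I/O side the harness runs bdevperf as its own SPDK application: -z makes it start idle on its private RPC socket until perform_tests is issued, and the remaining flags match the workload reported further down (4096-byte randwrite, queue depth 128, 10 seconds, with -S 1 matching the per-second rows in the table below). The remote lvol is then attached as an NVMe/TCP controller, which is what produces the Nvme0n1 bdev described in the JSON that follows. A sketch of those three steps, with paths shortened but the socket, address and NQN taken verbatim from this run:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0        # exposes the namespace as bdev Nvme0n1

./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests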
00:08:16.049 [2024-07-24 19:44:07.612688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921504 ] 00:08:16.049 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.310 [2024-07-24 19:44:07.665520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.310 [2024-07-24 19:44:07.737365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.880 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.880 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:16.880 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.450 Nvme0n1 00:08:17.450 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:17.450 [ 00:08:17.450 { 00:08:17.450 "name": "Nvme0n1", 00:08:17.450 "aliases": [ 00:08:17.450 "60a1452b-b1c9-4961-ad44-cfdee2b5aff2" 00:08:17.450 ], 00:08:17.450 "product_name": "NVMe disk", 00:08:17.450 "block_size": 4096, 00:08:17.450 "num_blocks": 38912, 00:08:17.450 "uuid": "60a1452b-b1c9-4961-ad44-cfdee2b5aff2", 00:08:17.450 "assigned_rate_limits": { 00:08:17.450 "rw_ios_per_sec": 0, 00:08:17.450 "rw_mbytes_per_sec": 0, 00:08:17.450 "r_mbytes_per_sec": 0, 00:08:17.450 "w_mbytes_per_sec": 0 00:08:17.450 }, 00:08:17.450 "claimed": false, 00:08:17.450 "zoned": false, 00:08:17.450 "supported_io_types": { 00:08:17.450 "read": true, 00:08:17.450 "write": true, 00:08:17.450 "unmap": true, 00:08:17.450 "flush": true, 00:08:17.450 "reset": true, 00:08:17.450 "nvme_admin": true, 00:08:17.450 "nvme_io": true, 00:08:17.450 "nvme_io_md": false, 00:08:17.450 "write_zeroes": true, 00:08:17.450 "zcopy": false, 00:08:17.450 "get_zone_info": false, 00:08:17.450 "zone_management": false, 00:08:17.450 "zone_append": false, 00:08:17.450 "compare": true, 00:08:17.450 "compare_and_write": true, 00:08:17.450 "abort": true, 00:08:17.450 "seek_hole": false, 00:08:17.450 "seek_data": false, 00:08:17.450 "copy": true, 00:08:17.450 "nvme_iov_md": false 00:08:17.450 }, 00:08:17.450 "memory_domains": [ 00:08:17.450 { 00:08:17.450 "dma_device_id": "system", 00:08:17.450 "dma_device_type": 1 00:08:17.450 } 00:08:17.450 ], 00:08:17.450 "driver_specific": { 00:08:17.450 "nvme": [ 00:08:17.450 { 00:08:17.450 "trid": { 00:08:17.450 "trtype": "TCP", 00:08:17.450 "adrfam": "IPv4", 00:08:17.450 "traddr": "10.0.0.2", 00:08:17.450 "trsvcid": "4420", 00:08:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:17.450 }, 00:08:17.450 "ctrlr_data": { 00:08:17.450 "cntlid": 1, 00:08:17.450 "vendor_id": "0x8086", 00:08:17.450 "model_number": "SPDK bdev Controller", 00:08:17.450 "serial_number": "SPDK0", 00:08:17.450 "firmware_revision": "24.09", 00:08:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.450 "oacs": { 00:08:17.450 "security": 0, 00:08:17.450 "format": 0, 00:08:17.450 "firmware": 0, 00:08:17.450 "ns_manage": 0 00:08:17.450 }, 00:08:17.450 
"multi_ctrlr": true, 00:08:17.450 "ana_reporting": false 00:08:17.450 }, 00:08:17.450 "vs": { 00:08:17.450 "nvme_version": "1.3" 00:08:17.450 }, 00:08:17.450 "ns_data": { 00:08:17.450 "id": 1, 00:08:17.450 "can_share": true 00:08:17.450 } 00:08:17.450 } 00:08:17.450 ], 00:08:17.450 "mp_policy": "active_passive" 00:08:17.450 } 00:08:17.450 } 00:08:17.450 ] 00:08:17.450 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1921744 00:08:17.450 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:17.450 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.450 Running I/O for 10 seconds... 00:08:18.832 Latency(us) 00:08:18.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.832 Nvme0n1 : 1.00 22394.00 87.48 0.00 0.00 0.00 0.00 0.00 00:08:18.832 =================================================================================================================== 00:08:18.832 Total : 22394.00 87.48 0.00 0.00 0.00 0.00 0.00 00:08:18.832 00:08:19.401 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:19.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.661 Nvme0n1 : 2.00 22372.50 87.39 0.00 0.00 0.00 0.00 0.00 00:08:19.661 =================================================================================================================== 00:08:19.661 Total : 22372.50 87.39 0.00 0.00 0.00 0.00 0.00 00:08:19.661 00:08:19.661 true 00:08:19.661 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:19.661 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.921 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.921 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.921 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1921744 00:08:20.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.491 Nvme0n1 : 3.00 22350.00 87.30 0.00 0.00 0.00 0.00 0.00 00:08:20.491 =================================================================================================================== 00:08:20.491 Total : 22350.00 87.30 0.00 0.00 0.00 0.00 0.00 00:08:20.491 00:08:21.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.873 Nvme0n1 : 4.00 22385.50 87.44 0.00 0.00 0.00 0.00 0.00 00:08:21.873 =================================================================================================================== 00:08:21.873 Total : 22385.50 87.44 0.00 0.00 0.00 0.00 0.00 00:08:21.873 00:08:22.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:22.812 Nvme0n1 : 5.00 22446.20 87.68 0.00 0.00 0.00 0.00 0.00 00:08:22.812 =================================================================================================================== 00:08:22.812 Total : 22446.20 87.68 0.00 0.00 0.00 0.00 0.00 00:08:22.812 00:08:23.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.752 Nvme0n1 : 6.00 22476.17 87.80 0.00 0.00 0.00 0.00 0.00 00:08:23.752 =================================================================================================================== 00:08:23.752 Total : 22476.17 87.80 0.00 0.00 0.00 0.00 0.00 00:08:23.752 00:08:24.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.691 Nvme0n1 : 7.00 22542.71 88.06 0.00 0.00 0.00 0.00 0.00 00:08:24.691 =================================================================================================================== 00:08:24.691 Total : 22542.71 88.06 0.00 0.00 0.00 0.00 0.00 00:08:24.691 00:08:25.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.667 Nvme0n1 : 8.00 22565.75 88.15 0.00 0.00 0.00 0.00 0.00 00:08:25.667 =================================================================================================================== 00:08:25.667 Total : 22565.75 88.15 0.00 0.00 0.00 0.00 0.00 00:08:25.667 00:08:26.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.608 Nvme0n1 : 9.00 22606.67 88.31 0.00 0.00 0.00 0.00 0.00 00:08:26.608 =================================================================================================================== 00:08:26.608 Total : 22606.67 88.31 0.00 0.00 0.00 0.00 0.00 00:08:26.608 00:08:27.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.548 Nvme0n1 : 10.00 22623.50 88.37 0.00 0.00 0.00 0.00 0.00 00:08:27.548 =================================================================================================================== 00:08:27.549 Total : 22623.50 88.37 0.00 0.00 0.00 0.00 0.00 00:08:27.549 00:08:27.549 00:08:27.549 Latency(us) 00:08:27.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.549 Nvme0n1 : 10.01 22623.33 88.37 0.00 0.00 5653.78 2678.43 30773.43 00:08:27.549 =================================================================================================================== 00:08:27.549 Total : 22623.33 88.37 0.00 0.00 5653.78 2678.43 30773.43 00:08:27.549 0 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1921504 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1921504 ']' 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1921504 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1921504 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:27.549 
19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1921504' 00:08:27.549 killing process with pid 1921504 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1921504 00:08:27.549 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.549 00:08:27.549 Latency(us) 00:08:27.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.549 =================================================================================================================== 00:08:27.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.549 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1921504 00:08:27.809 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.069 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1918294 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1918294 00:08:28.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1918294 Killed "${NVMF_APP[@]}" "$@" 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1923591 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 1923591 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1923591 ']' 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.330 19:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.590 [2024-07-24 19:44:19.941623] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:08:28.590 [2024-07-24 19:44:19.941670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.590 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.590 [2024-07-24 19:44:19.999023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.590 [2024-07-24 19:44:20.091474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.590 [2024-07-24 19:44:20.091509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.590 [2024-07-24 19:44:20.091516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.590 [2024-07-24 19:44:20.091523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.590 [2024-07-24 19:44:20.091528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
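This is the "dirty" half of the test: the first target (pid 1918294) was killed with SIGKILL while the grown lvstore was live, so nothing was flushed cleanly. The fresh nvmf_tgt started here only re-creates the AIO bdev; the blobstore recovery notices that follow show the lvol metadata being replayed from disk, after which the free/total cluster checks (61 of 99) must still hold. Roughly, reusing the illustrative variables from the setup sketch and the flags this run passes:

kill -9 "$NVMF_PID"                           # the run kills pid 1918294 mid-flight
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # restart (the log wraps this in ip netns exec)
./scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096    # triggers blobstore recovery
./scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'        # 61
./scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # 99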
00:08:28.590 [2024-07-24 19:44:20.091544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.160 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.160 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:29.160 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.160 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.160 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.419 [2024-07-24 19:44:20.946036] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:29.419 [2024-07-24 19:44:20.946130] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:29.419 [2024-07-24 19:44:20.946154] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.419 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:29.420 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.420 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.420 19:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.679 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 -t 2000 00:08:29.939 [ 00:08:29.939 { 00:08:29.939 "name": "60a1452b-b1c9-4961-ad44-cfdee2b5aff2", 00:08:29.939 "aliases": [ 00:08:29.939 "lvs/lvol" 00:08:29.939 ], 00:08:29.939 "product_name": "Logical Volume", 00:08:29.939 "block_size": 4096, 00:08:29.939 "num_blocks": 38912, 00:08:29.939 "uuid": "60a1452b-b1c9-4961-ad44-cfdee2b5aff2", 00:08:29.939 "assigned_rate_limits": { 00:08:29.939 "rw_ios_per_sec": 0, 00:08:29.939 "rw_mbytes_per_sec": 0, 00:08:29.939 "r_mbytes_per_sec": 0, 00:08:29.939 "w_mbytes_per_sec": 0 00:08:29.939 }, 00:08:29.939 "claimed": false, 00:08:29.939 "zoned": false, 
00:08:29.939 "supported_io_types": { 00:08:29.939 "read": true, 00:08:29.939 "write": true, 00:08:29.939 "unmap": true, 00:08:29.939 "flush": false, 00:08:29.939 "reset": true, 00:08:29.939 "nvme_admin": false, 00:08:29.939 "nvme_io": false, 00:08:29.939 "nvme_io_md": false, 00:08:29.939 "write_zeroes": true, 00:08:29.939 "zcopy": false, 00:08:29.939 "get_zone_info": false, 00:08:29.939 "zone_management": false, 00:08:29.939 "zone_append": false, 00:08:29.939 "compare": false, 00:08:29.939 "compare_and_write": false, 00:08:29.939 "abort": false, 00:08:29.939 "seek_hole": true, 00:08:29.939 "seek_data": true, 00:08:29.939 "copy": false, 00:08:29.939 "nvme_iov_md": false 00:08:29.939 }, 00:08:29.939 "driver_specific": { 00:08:29.939 "lvol": { 00:08:29.939 "lvol_store_uuid": "2b00988f-dc05-4fea-9569-39630d373e5f", 00:08:29.939 "base_bdev": "aio_bdev", 00:08:29.939 "thin_provision": false, 00:08:29.939 "num_allocated_clusters": 38, 00:08:29.939 "snapshot": false, 00:08:29.939 "clone": false, 00:08:29.939 "esnap_clone": false 00:08:29.939 } 00:08:29.939 } 00:08:29.939 } 00:08:29.939 ] 00:08:29.939 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:29.939 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:29.940 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:29.940 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:29.940 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:29.940 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:30.200 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:30.200 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.460 [2024-07-24 19:44:21.822788] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:30.460 19:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:30.460 request: 00:08:30.460 { 00:08:30.460 "uuid": "2b00988f-dc05-4fea-9569-39630d373e5f", 00:08:30.460 "method": "bdev_lvol_get_lvstores", 00:08:30.460 "req_id": 1 00:08:30.460 } 00:08:30.460 Got JSON-RPC error response 00:08:30.460 response: 00:08:30.460 { 00:08:30.460 "code": -19, 00:08:30.460 "message": "No such device" 00:08:30.460 } 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.721 aio_bdev 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.721 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.981 19:44:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 -t 2000 00:08:30.981 [ 00:08:30.981 { 00:08:30.981 "name": "60a1452b-b1c9-4961-ad44-cfdee2b5aff2", 00:08:30.981 "aliases": [ 00:08:30.981 "lvs/lvol" 00:08:30.981 ], 00:08:30.981 "product_name": "Logical Volume", 00:08:30.981 "block_size": 4096, 00:08:30.981 "num_blocks": 38912, 00:08:30.981 "uuid": "60a1452b-b1c9-4961-ad44-cfdee2b5aff2", 00:08:30.981 "assigned_rate_limits": { 00:08:30.981 "rw_ios_per_sec": 0, 00:08:30.981 "rw_mbytes_per_sec": 0, 00:08:30.981 "r_mbytes_per_sec": 0, 00:08:30.981 "w_mbytes_per_sec": 0 00:08:30.981 }, 00:08:30.981 "claimed": false, 00:08:30.981 "zoned": false, 00:08:30.981 "supported_io_types": { 00:08:30.981 "read": true, 00:08:30.981 "write": true, 00:08:30.981 "unmap": true, 00:08:30.981 "flush": false, 00:08:30.981 "reset": true, 00:08:30.981 "nvme_admin": false, 00:08:30.981 "nvme_io": false, 00:08:30.981 "nvme_io_md": false, 00:08:30.981 "write_zeroes": true, 00:08:30.981 "zcopy": false, 00:08:30.981 "get_zone_info": false, 00:08:30.981 "zone_management": false, 00:08:30.981 "zone_append": false, 00:08:30.981 "compare": false, 00:08:30.981 "compare_and_write": false, 00:08:30.981 "abort": false, 00:08:30.981 "seek_hole": true, 00:08:30.981 "seek_data": true, 00:08:30.981 "copy": false, 00:08:30.981 "nvme_iov_md": false 00:08:30.981 }, 00:08:30.981 "driver_specific": { 00:08:30.981 "lvol": { 00:08:30.981 "lvol_store_uuid": "2b00988f-dc05-4fea-9569-39630d373e5f", 00:08:30.981 "base_bdev": "aio_bdev", 00:08:30.981 "thin_provision": false, 00:08:30.981 "num_allocated_clusters": 38, 00:08:30.981 "snapshot": false, 00:08:30.981 "clone": false, 00:08:30.981 "esnap_clone": false 00:08:30.981 } 00:08:30.981 } 00:08:30.981 } 00:08:30.981 ] 00:08:30.981 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:30.981 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:30.981 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.241 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.241 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b00988f-dc05-4fea-9569-39630d373e5f 00:08:31.241 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.500 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.500 19:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60a1452b-b1c9-4961-ad44-cfdee2b5aff2 00:08:31.500 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b00988f-dc05-4fea-9569-39630d373e5f 
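Teardown unwinds the stack in reverse order: lvol, then lvstore, then (just below) the AIO bdev and its backing file, since each layer claims the one beneath it. Condensed, with the same illustrative variables as above:

./scripts/rpc.py bdev_lvol_delete "$LVOL"
./scripts/rpc.py bdev_lvol_delete_lvstore -u "$LVS"
./scripts/rpc.py bdev_aio_delete aio_bdev
rm -f "$AIO"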
00:08:31.759 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.019 00:08:32.019 real 0m17.677s 00:08:32.019 user 0m45.090s 00:08:32.019 sys 0m3.908s 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.019 ************************************ 00:08:32.019 END TEST lvs_grow_dirty 00:08:32.019 ************************************ 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:32.019 nvmf_trace.0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.019 rmmod nvme_tcp 00:08:32.019 rmmod nvme_fabrics 00:08:32.019 rmmod nvme_keyring 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1923591 ']' 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1923591 00:08:32.019 
19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1923591 ']' 00:08:32.019 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1923591 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1923591 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1923591' 00:08:32.279 killing process with pid 1923591 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1923591 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1923591 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.279 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.280 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.821 00:08:34.821 real 0m42.430s 00:08:34.821 user 1m6.229s 00:08:34.821 sys 0m9.756s 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.821 ************************************ 00:08:34.821 END TEST nvmf_lvs_grow 00:08:34.821 ************************************ 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.821 ************************************ 00:08:34.821 START TEST nvmf_bdev_io_wait 00:08:34.821 ************************************ 00:08:34.821 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.821 * Looking for test storage... 00:08:34.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.821 
19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.821 19:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:40.105 19:44:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:40.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:40.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:40.105 Found net devices under 0000:86:00.0: cvl_0_0 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:40.105 Found net devices under 0000:86:00.1: cvl_0_1 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.105 19:44:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.105 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:40.106 00:08:40.106 --- 10.0.0.2 ping statistics --- 00:08:40.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.106 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:08:40.106 00:08:40.106 --- 10.0.0.1 ping statistics --- 00:08:40.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.106 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1927689 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1927689 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1927689 ']' 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.106 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.106 [2024-07-24 19:44:31.594843] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
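The nvmf_tcp_init sequence traced above reduces to a short recipe. The following is a condensed sketch assembled from the ip/iptables commands visible in the log (the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing are as logged); it is a reconstruction for readability, not a verbatim excerpt of nvmf/common.sh:

    # Move the target-side port into its own network namespace; the
    # initiator-side port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

With both pings answered, nvmf_tgt is started inside the namespace (the "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc" record above), and the trace that follows waits for it to come up.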
00:08:40.106 [2024-07-24 19:44:31.594889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.106 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.106 [2024-07-24 19:44:31.651906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.366 [2024-07-24 19:44:31.735001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.366 [2024-07-24 19:44:31.735037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.366 [2024-07-24 19:44:31.735050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.366 [2024-07-24 19:44:31.735056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.366 [2024-07-24 19:44:31.735061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.366 [2024-07-24 19:44:31.735107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.366 [2024-07-24 19:44:31.735201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.366 [2024-07-24 19:44:31.735285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.366 [2024-07-24 19:44:31.735286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 19:44:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 [2024-07-24 19:44:32.505520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 Malloc0 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 [2024-07-24 19:44:32.570661] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1927891 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1927893 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.196 { 00:08:41.196 "params": { 00:08:41.196 "name": "Nvme$subsystem", 00:08:41.196 "trtype": "$TEST_TRANSPORT", 00:08:41.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.196 "adrfam": "ipv4", 00:08:41.196 "trsvcid": "$NVMF_PORT", 00:08:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.196 "hdgst": ${hdgst:-false}, 00:08:41.196 "ddgst": ${ddgst:-false} 00:08:41.196 }, 00:08:41.196 "method": "bdev_nvme_attach_controller" 00:08:41.196 } 00:08:41.196 EOF 00:08:41.196 )") 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1927895 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.196 { 00:08:41.196 "params": { 00:08:41.196 "name": "Nvme$subsystem", 00:08:41.196 "trtype": "$TEST_TRANSPORT", 00:08:41.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.196 "adrfam": "ipv4", 00:08:41.196 "trsvcid": "$NVMF_PORT", 00:08:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.196 "hdgst": ${hdgst:-false}, 00:08:41.196 "ddgst": ${ddgst:-false} 00:08:41.196 }, 00:08:41.196 "method": "bdev_nvme_attach_controller" 00:08:41.196 } 00:08:41.196 EOF 00:08:41.196 )") 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1927898 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.196 { 00:08:41.196 "params": { 00:08:41.196 "name": "Nvme$subsystem", 00:08:41.196 "trtype": "$TEST_TRANSPORT", 00:08:41.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.196 "adrfam": "ipv4", 00:08:41.196 "trsvcid": "$NVMF_PORT", 00:08:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.196 "hdgst": ${hdgst:-false}, 00:08:41.196 "ddgst": ${ddgst:-false} 00:08:41.196 }, 00:08:41.196 "method": "bdev_nvme_attach_controller" 00:08:41.196 } 00:08:41.196 EOF 00:08:41.196 )") 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.196 { 00:08:41.196 "params": { 00:08:41.196 "name": "Nvme$subsystem", 00:08:41.196 "trtype": "$TEST_TRANSPORT", 00:08:41.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.196 "adrfam": "ipv4", 00:08:41.196 "trsvcid": "$NVMF_PORT", 00:08:41.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.196 "hdgst": ${hdgst:-false}, 00:08:41.196 "ddgst": ${ddgst:-false} 00:08:41.196 }, 00:08:41.196 "method": "bdev_nvme_attach_controller" 00:08:41.196 } 00:08:41.196 EOF 00:08:41.196 )") 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1927891 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.196 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.196 "params": { 00:08:41.196 "name": "Nvme1", 00:08:41.196 "trtype": "tcp", 00:08:41.196 "traddr": "10.0.0.2", 00:08:41.196 "adrfam": "ipv4", 00:08:41.197 "trsvcid": "4420", 00:08:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.197 "hdgst": false, 00:08:41.197 "ddgst": false 00:08:41.197 }, 00:08:41.197 "method": "bdev_nvme_attach_controller" 00:08:41.197 }' 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.197 "params": { 00:08:41.197 "name": "Nvme1", 00:08:41.197 "trtype": "tcp", 00:08:41.197 "traddr": "10.0.0.2", 00:08:41.197 "adrfam": "ipv4", 00:08:41.197 "trsvcid": "4420", 00:08:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.197 "hdgst": false, 00:08:41.197 "ddgst": false 00:08:41.197 }, 00:08:41.197 "method": "bdev_nvme_attach_controller" 00:08:41.197 }' 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.197 "params": { 00:08:41.197 "name": "Nvme1", 00:08:41.197 "trtype": "tcp", 00:08:41.197 "traddr": "10.0.0.2", 00:08:41.197 "adrfam": "ipv4", 00:08:41.197 "trsvcid": "4420", 00:08:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.197 "hdgst": false, 00:08:41.197 "ddgst": false 00:08:41.197 }, 00:08:41.197 "method": "bdev_nvme_attach_controller" 00:08:41.197 }' 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.197 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.197 "params": { 00:08:41.197 "name": "Nvme1", 00:08:41.197 "trtype": "tcp", 00:08:41.197 "traddr": "10.0.0.2", 00:08:41.197 "adrfam": "ipv4", 00:08:41.197 "trsvcid": "4420", 00:08:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.197 "hdgst": false, 00:08:41.197 "ddgst": false 00:08:41.197 }, 00:08:41.197 "method": "bdev_nvme_attach_controller" 00:08:41.197 }' 00:08:41.197 [2024-07-24 19:44:32.622573] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:08:41.197 [2024-07-24 19:44:32.622575] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:08:41.197 [2024-07-24 19:44:32.622573] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:08:41.197 [2024-07-24 19:44:32.622623] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:41.197 [2024-07-24 19:44:32.622624] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:41.197 [2024-07-24 19:44:32.622624] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:41.197 [2024-07-24 19:44:32.625939] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
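The four bdevperf startup banners above belong to the four workloads bdev_io_wait.sh fans out, one per core. Schematically (flags exactly as logged; the /dev/fd/63 in the trace is bash process substitution feeding each instance the gen_nvmf_target_json output shown in the printf records, sketched here as <(...)):

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &  # WRITE_PID
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &   # READ_PID
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &  # FLUSH_PID
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &  # UNMAP_PID

Running them concurrently is the point of the test; the distinct -i instance ids give each process its own DPDK file prefix (spdk1-spdk4), which is why their EAL banners land intermixed in the log.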
00:08:41.197 [2024-07-24 19:44:32.625985] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:41.197 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.197 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.457 [2024-07-24 19:44:32.811561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.457 [2024-07-24 19:44:32.887546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.457 [2024-07-24 19:44:32.903229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.457 [2024-07-24 19:44:32.981407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:41.457 [2024-07-24 19:44:33.003330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.717 [2024-07-24 19:44:33.062548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.717 [2024-07-24 19:44:33.084325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.717 [2024-07-24 19:44:33.138573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:41.717 Running I/O for 1 seconds... 00:08:41.717 Running I/O for 1 seconds... 00:08:41.717 Running I/O for 1 seconds... 00:08:41.989 Running I/O for 1 seconds... 00:08:42.935 00:08:42.935 Latency(us) 00:08:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.935 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:42.935 Nvme1n1 : 1.02 7055.23 27.56 0.00 0.00 17999.96 3362.28 20971.52 00:08:42.935 =================================================================================================================== 00:08:42.935 Total : 7055.23 27.56 0.00 0.00 17999.96 3362.28 20971.52 00:08:42.935 00:08:42.935 Latency(us) 00:08:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.935 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:42.935 Nvme1n1 : 1.01 13080.09 51.09 0.00 0.00 9732.21 4616.01 20515.62 00:08:42.935 =================================================================================================================== 00:08:42.935 Total : 13080.09 51.09 0.00 0.00 9732.21 4616.01 20515.62 00:08:42.935 00:08:42.935 Latency(us) 00:08:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.935 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:42.935 Nvme1n1 : 1.00 7329.62 28.63 0.00 0.00 17426.54 4131.62 36244.26 00:08:42.935 =================================================================================================================== 00:08:42.935 Total : 7329.62 28.63 0.00 0.00 17426.54 4131.62 36244.26 00:08:42.935 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1927893 00:08:42.935 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1927895 00:08:42.935 00:08:42.935 Latency(us) 00:08:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.935 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:42.935 Nvme1n1 : 1.00 245508.49 959.02 0.00 0.00 519.95 212.81 648.24 00:08:42.935 
=================================================================================================================== 00:08:42.935 Total : 245508.49 959.02 0.00 0.00 519.95 212.81 648.24 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1927898 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.196 rmmod nvme_tcp 00:08:43.196 rmmod nvme_fabrics 00:08:43.196 rmmod nvme_keyring 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1927689 ']' 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1927689 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1927689 ']' 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1927689 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1927689 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1927689' 00:08:43.196 killing process with pid 1927689 00:08:43.196 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1927689 00:08:43.196 19:44:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1927689 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.456 19:44:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.997 19:44:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.997 00:08:45.997 real 0m11.011s 00:08:45.997 user 0m19.706s 00:08:45.997 sys 0m5.734s 00:08:45.997 19:44:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.997 19:44:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.997 ************************************ 00:08:45.997 END TEST nvmf_bdev_io_wait 00:08:45.997 ************************************ 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.997 ************************************ 00:08:45.997 START TEST nvmf_queue_depth 00:08:45.997 ************************************ 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.997 * Looking for test storage... 
00:08:45.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.997 19:44:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.997 19:44:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.278 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.278 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.278 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.278 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.279 19:44:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:51.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:51.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:51.279 19:44:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:51.279 Found net devices under 0000:86:00.0: cvl_0_0 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:51.279 Found net devices under 0000:86:00.1: cvl_0_1 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.279 
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:51.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:51.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms
00:08:51.279
00:08:51.279 --- 10.0.0.2 ping statistics ---
00:08:51.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:51.279 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:51.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:51.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms
00:08:51.279
00:08:51.279 --- 10.0.0.1 ping statistics ---
00:08:51.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:51.279 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:51.279 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1931789
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1931789
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1931789 ']'
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:51.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:51.280 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:51.280 [2024-07-24 19:44:42.763136] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
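The nvmf_tcp_init sequence just traced splits the two cvl ports across network namespaces so a single host can act as both NVMe/TCP target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction verifies the path. Condensed from the trace (interface, namespace, and address values are this run's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1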
00:08:51.280 [2024-07-24 19:44:42.763182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:51.280 EAL: No free 2048 kB hugepages reported on node 1
00:08:51.280 [2024-07-24 19:44:42.821003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.540 [2024-07-24 19:44:42.895871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:51.540 [2024-07-24 19:44:42.895908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:51.540 [2024-07-24 19:44:42.895915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:51.540 [2024-07-24 19:44:42.895921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:51.540 [2024-07-24 19:44:42.895926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:51.540 [2024-07-24 19:44:42.895944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.149 [2024-07-24 19:44:43.603876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.149 Malloc0
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:52.149 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.150 [2024-07-24 19:44:43.660607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1931927
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1931927 /var/tmp/bdevperf.sock
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1931927 ']'
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:52.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:52.150 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:52.150 [2024-07-24 19:44:43.711190] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
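Between the two application launches the test wires up the target over /var/tmp/spdk.sock: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420, then starts bdevperf as the initiator-side load generator. The rpc_cmd helper in the trace issues these through SPDK's rpc.py client; the same sequence written out directly (all arguments as traced) would look roughly like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator-side load generator: queue depth 1024, 4 KiB verify I/O for 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10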
00:08:52.150 [2024-07-24 19:44:43.711235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931927 ]
00:08:52.150 EAL: No free 2048 kB hugepages reported on node 1
00:08:52.410 [2024-07-24 19:44:43.765775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.410 [2024-07-24 19:44:43.840056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.979 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:52.979 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:08:52.979 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:52.979 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.979 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:53.238 NVMe0n1
00:08:53.238 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.238 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:53.238 Running I/O for 10 seconds...
00:09:03.226
00:09:03.226 Latency(us)
00:09:03.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:03.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:03.226 Verification LBA range: start 0x0 length 0x4000
00:09:03.226 NVMe0n1 : 10.08 11988.03 46.83 0.00 0.00 85138.70 20173.69 104401.70
00:09:03.226 ===================================================================================================================
00:09:03.226 Total : 11988.03 46.83 0.00 0.00 85138.70 20173.69 104401.70
00:09:03.226 0
00:09:03.226 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1931927
00:09:03.226 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1931927 ']'
00:09:03.226 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1931927
00:09:03.226 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931927
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931927'
00:09:03.487 killing process with pid 1931927
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1931927
00:09:03.487 Received shutdown signal, test time was about 10.000000 seconds
00:09:03.487
00:09:03.487 Latency(us)
00:09:03.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:03.487 ===================================================================================================================
00:09:03.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:03.487 19:44:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1931927
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:03.487 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:03.487 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1931789 ']'
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1931789
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1931789 ']'
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1931789
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931789
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931789'
00:09:03.747 killing process with pid 1931789
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1931789
00:09:03.747 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1931789
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:04.006 19:44:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:05.939 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:05.939
00:09:05.939 real 0m20.366s
00:09:05.939 user 0m24.782s
00:09:05.939 sys 0m5.758s
00:09:05.939 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:05.939 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:05.939 ************************************
00:09:05.939 END TEST nvmf_queue_depth
00:09:05.939 ************************************
00:09:05.940 19:44:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:09:05.940 19:44:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:05.940 19:44:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:05.940 19:44:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:05.940 ************************************
00:09:05.940 START TEST nvmf_target_multipath
00:09:05.940 ************************************
00:09:05.940 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:09:06.200 * Looking for test storage...
00:09:06.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:06.200 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.201 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:11.484 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:11.484 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.484 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:11.485 Found net devices under 0000:86:00.0: cvl_0_0 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.485 19:45:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:11.485 Found net devices under 0000:86:00.1: cvl_0_1 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.485 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.485 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.485 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.485 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.485 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.745 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.745 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.745 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.745 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:11.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:09:11.745
00:09:11.745 --- 10.0.0.2 ping statistics ---
00:09:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:11.745 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:09:11.745 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:11.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:11.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms
00:09:11.745
00:09:11.745 --- 10.0.0.1 ping statistics ---
00:09:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:11.745 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:11.746 only one NIC for nvmf test
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:11.746 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:11.746 19:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:13.654 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:13.914 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:13.915
00:09:13.915 real 0m7.784s
00:09:13.915 user 0m1.594s
00:09:13.915 sys 0m4.155s
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:13.915 ************************************
00:09:13.915 END TEST nvmf_target_multipath
00:09:13.915 ************************************
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:13.915 ************************************
00:09:13.915 START TEST nvmf_zcopy
00:09:13.915 ************************************
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:13.915 * Looking for test storage...
00:09:13.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:13.915 19:45:05
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.915 19:45:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.915 19:45:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.197 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.197 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.197 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.197 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.197 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.198 19:45:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:19.198 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:19.198 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:19.198 Found net devices under 0000:86:00.0: cvl_0_0 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:19.198 Found net devices under 0000:86:00.1: cvl_0_1 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.198 19:45:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:19.198 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:19.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:09:19.461 00:09:19.461 --- 10.0.0.2 ping statistics --- 00:09:19.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.461 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:09:19.461 00:09:19.461 --- 10.0.0.1 ping statistics --- 00:09:19.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.461 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1941311 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1941311 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1941311 ']' 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.461 19:45:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.461 [2024-07-24 19:45:10.937961] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
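For readers skimming the xtrace, the test-network plumbing that nvmftestinit just performed reduces to the following sketch. This is a recap distilled from the commands visible in the trace above, not an extract of the script itself; cvl_0_0/cvl_0_1 are simply the names the ice driver exposed for the two E810 ports on this host:

ip netns add cvl_0_0_ns_spdk                                        # the target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port on the test interface
ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check

Both pings answer in under 0.3 ms, confirming the back-to-back link between the two ports is healthy before the target application starts: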
00:09:19.461 [2024-07-24 19:45:10.937961] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:09:19.461 [2024-07-24 19:45:10.938004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:19.461 EAL: No free 2048 kB hugepages reported on node 1
00:09:19.461 [2024-07-24 19:45:10.994978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.720 [2024-07-24 19:45:11.075441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:19.720 [2024-07-24 19:45:11.075473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:19.720 [2024-07-24 19:45:11.075481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:19.720 [2024-07-24 19:45:11.075488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:19.720 [2024-07-24 19:45:11.075493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:19.720 [2024-07-24 19:45:11.075510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.290 [2024-07-24 19:45:11.782406] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.290 [2024-07-24 19:45:11.802556] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:20.290 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.291 malloc0
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:20.291 {
00:09:20.291 "params": {
00:09:20.291 "name": "Nvme$subsystem",
00:09:20.291 "trtype": "$TEST_TRANSPORT",
00:09:20.291 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:20.291 "adrfam": "ipv4",
00:09:20.291 "trsvcid": "$NVMF_PORT",
00:09:20.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:20.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:20.291 "hdgst": ${hdgst:-false},
00:09:20.291 "ddgst": ${ddgst:-false}
00:09:20.291 },
00:09:20.291 "method": "bdev_nvme_attach_controller"
00:09:20.291 }
00:09:20.291 EOF
00:09:20.291 )")
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:20.291 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:20.291 "params": {
00:09:20.291 "name": "Nvme1",
00:09:20.291 "trtype": "tcp",
00:09:20.291 "traddr": "10.0.0.2",
00:09:20.291 "adrfam": "ipv4",
00:09:20.291 "trsvcid": "4420",
00:09:20.291 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:20.291 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:20.291 "hdgst": false,
00:09:20.291 "ddgst": false
00:09:20.291 },
00:09:20.291 "method": "bdev_nvme_attach_controller"
00:09:20.291 }'
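Stripped of the rpc_cmd and xtrace plumbing, the target provisioning traced above comes down to six RPCs plus a bdevperf launch. A minimal sketch, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock (rpc_cmd is the suite's wrapper around rpc.py, and gen_nvmf_target_json emits the JSON just printed):

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MB RAM-backed bdev, 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# 10-second verify workload, queue depth 128, 8 KiB I/O, over the zcopy-enabled connection;
# the /dev/fd/62 seen in the trace is this process substitution:
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The run below sustains roughly 7755 IOPS of verified 8 KiB I/O (about 60.6 MiB/s) for 10 seconds with zero failures, which is what this phase needs to pass: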
00:09:20.551 [2024-07-24 19:45:11.897512] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:09:20.551 [2024-07-24 19:45:11.897555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941475 ]
00:09:20.551 EAL: No free 2048 kB hugepages reported on node 1
00:09:20.551 [2024-07-24 19:45:11.951413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:20.551 [2024-07-24 19:45:12.025184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:20.810 Running I/O for 10 seconds...
00:09:30.798
00:09:30.798 Latency(us)
00:09:30.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:30.798 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:30.798 Verification LBA range: start 0x0 length 0x1000
00:09:30.798 Nvme1n1 : 10.02 7754.74 60.58 0.00 0.00 16461.97 1937.59 44906.41
00:09:30.798 ===================================================================================================================
00:09:30.798 Total : 7754.74 60.58 0.00 0.00 16461.97 1937.59 44906.41
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1943182
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:31.059 {
00:09:31.059 "params": {
00:09:31.059 "name": "Nvme$subsystem",
00:09:31.059 "trtype": "$TEST_TRANSPORT",
00:09:31.059 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:31.059 "adrfam": "ipv4",
00:09:31.059 "trsvcid": "$NVMF_PORT",
00:09:31.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:31.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:31.059 "hdgst": ${hdgst:-false},
00:09:31.059 "ddgst": ${ddgst:-false}
00:09:31.059 },
00:09:31.059 "method": "bdev_nvme_attach_controller"
00:09:31.059 }
00:09:31.059 EOF
00:09:31.059 )")
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:31.059 [2024-07-24 19:45:22.537960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.537992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:31.059 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:31.059 "params": {
00:09:31.059 "name": "Nvme1",
00:09:31.059 "trtype": "tcp",
00:09:31.059 "traddr": "10.0.0.2",
00:09:31.059 "adrfam": "ipv4",
00:09:31.059 "trsvcid": "4420",
00:09:31.059 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:31.059 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:31.059 "hdgst": false,
00:09:31.059 "ddgst": false
00:09:31.059 },
00:09:31.059 "method": "bdev_nvme_attach_controller"
00:09:31.059 }'
00:09:31.059 [2024-07-24 19:45:22.549960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.549973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.557978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.557988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.565999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.566008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.573657] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
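The wall of paired *ERROR* lines that starts above and continues below is expected output, not a failure. With xtrace switched off at target/zcopy.sh@41, the script runs a second bdevperf (5-second randrw mix at queue depth 128, 8 KiB I/O, PID 1943182) and, while it runs, keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which already exists. Each attempt pauses the subsystem, is rejected at subsystem.c:2058, is reported by the nvmf_rpc_ns_paused callback at nvmf_rpc.c:1553, and the subsystem resumes, so in-flight zero-copy requests are repeatedly queued and replayed around pause/resume. Roughly, as a sketch (the loop structure is an assumption; the exact control flow is hidden by the xtrace_disable):

build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!    # 1943182 in this run
while kill -0 "$perfpid" 2>/dev/null; do
    # Expected to fail with "Requested NSID 1 already in use"; the attempt still
    # pauses and resumes the subsystem while zcopy I/O is outstanding.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done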
00:09:31.059 [2024-07-24 19:45:22.573701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943182 ]
00:09:31.059 [2024-07-24 19:45:22.578031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.578040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.590069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.590079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 EAL: No free 2048 kB hugepages reported on node 1
00:09:31.059 [2024-07-24 19:45:22.602100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.602109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.614135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.614144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.626166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.626175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.627366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.059 [2024-07-24 19:45:22.638202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.638214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.059 [2024-07-24 19:45:22.650231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.059 [2024-07-24 19:45:22.650241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.662262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.662274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.674297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.674318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.686326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.686336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.698359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.698369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.704621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.318 [2024-07-24 19:45:22.710394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.710405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.722434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.722454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.734461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.734473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.318 [2024-07-24 19:45:22.746488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.318 [2024-07-24 19:45:22.746499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.758522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.758532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.770555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.770565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.782584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.782593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.794634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.794654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.806662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.806676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.818694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.818708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.830724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.830733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.842755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.842764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.854791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.854805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.866828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.866842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.878860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.878873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 [2024-07-24 19:45:22.891190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.891207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.319 Running I/O for 5 seconds...
00:09:31.319 [2024-07-24 19:45:22.902930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.319 [2024-07-24 19:45:22.902941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.929168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.929187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.943825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.943844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.958034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.958062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.972478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.972498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.988853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.988872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:22.999679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:22.999698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.009509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.009529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.019325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.019345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.035303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.035322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.052551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.052570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.062150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.062168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.077394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.077412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.087560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.087578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.102584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.102602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.117574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.117600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.126005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.126024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.139829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.139847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.152608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.152626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.161113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.161131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.579 [2024-07-24 19:45:23.175870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.579 [2024-07-24 19:45:23.175889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.186914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.186933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.201304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.201322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.212367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.212384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.226011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.226029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.241808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.241826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.255542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.255560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.266034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.266055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.277561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.277579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.291390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.291408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.305682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.305700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.321411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.321428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.329458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.839 [2024-07-24 19:45:23.329475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.839 [2024-07-24 19:45:23.338873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.338890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.352945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.352963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.365772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.365789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.380438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.380456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.391594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.391611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.400126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.400144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.408590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.408607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.417768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.417786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.840 [2024-07-24 19:45:23.433772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:31.840 [2024-07-24 19:45:23.433791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.443865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.443884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.460344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.460363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.472788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.472806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.487664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.487682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.498459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.498477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.507736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.507754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.522141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.522159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.536031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.536056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.550222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.550240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.561182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.561200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.576979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.576997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.587161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.587180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.601598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.601615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.614687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.614721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.627521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.627539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.635832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.635850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.650121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.650140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.658808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.658826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.673425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.673443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.099 [2024-07-24 19:45:23.685722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.099 [2024-07-24 19:45:23.685739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.700552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.700571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.711562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.711580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.720428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.720445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.736407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.736425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.751907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.751925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.764084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.764102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.776889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.776908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.791661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.791678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.807777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.807795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.816148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.816165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.825126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.825144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.839440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.839458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.853401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.853418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.864728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.864746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.873545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.873562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.882227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.882245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.889780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.889797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.905119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.905137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.918777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.918796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.933456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.933475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.940433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.940450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.360 [2024-07-24 19:45:23.949754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.360 [2024-07-24 19:45:23.949772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:23.963402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:23.963420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:23.979313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:23.979331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:23.988856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:23.988873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.003949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.003966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.022091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.022115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.036150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.036169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.047385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.047403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.056080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.056099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.619 [2024-07-24 19:45:24.064735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.619 [2024-07-24 19:45:24.064754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.079117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.079136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.086211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.086229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.096510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.096528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.105752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.105770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.114445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.114463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.123316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.123335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.132198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.132216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.139859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.139877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.151248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.151266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.162274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.162293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.171422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.171440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.180952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.180971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.190391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.190409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.199189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.199207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.208541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.208559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.620 [2024-07-24 19:45:24.217600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.620 [2024-07-24 19:45:24.217618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.226915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.226937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.235098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.235116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.243738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.243756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.252807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.252824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.261675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.261693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.270180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.270197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:32.879 [2024-07-24 19:45:24.278650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:32.879 [2024-07-24 19:45:24.278667]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.287423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.287441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.297177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.297195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.306285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.306303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.315247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.315264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.326320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.326337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.337027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.337051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.345636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.345654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.354510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.354528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.363565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.363583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.373506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.373524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.381505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.381523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.391428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.879 [2024-07-24 19:45:24.391446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.879 [2024-07-24 19:45:24.400787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.400809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.408282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.408299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.417701] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.417719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.424695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.424712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.435444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.435461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.444048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.444065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.452595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.452612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.461430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.461447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.880 [2024-07-24 19:45:24.470872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.880 [2024-07-24 19:45:24.470889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.480537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.480556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.487439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.487456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.498887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.498905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.506802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.506820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.516365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.516383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.526389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.526407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.534875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.534892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.543895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.543913] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.552691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.552708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.561440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.561457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.609707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.609728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.618149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.618167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.628174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.628191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.637132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.637150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.645753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.645771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.654773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.654791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.663839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.663856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.672712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.672729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.681106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.681124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.688564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.688581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.698527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.698544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.707777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.707794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.716377] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.716395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.140 [2024-07-24 19:45:24.727344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.140 [2024-07-24 19:45:24.727362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.400 [2024-07-24 19:45:24.739993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.400 [2024-07-24 19:45:24.740012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.400 [2024-07-24 19:45:24.749373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.400 [2024-07-24 19:45:24.749390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.400 [2024-07-24 19:45:24.757903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.757921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.769503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.769519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.779172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.779190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.788104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.788126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.797339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.797356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.806368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.806385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.814897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.814914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.823511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.823528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.830667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.830685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.841372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.841389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.849881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.849899] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.858634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.858652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.870468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.870485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.879921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.879939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.887948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.887966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.896659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.896677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.904398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.904415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.913699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.913716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.922990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.923007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.930394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.930413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.939544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.939561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.948158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.948176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.955437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.955455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.966726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.966744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.975360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.975378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.982893] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.982910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.401 [2024-07-24 19:45:24.992412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.401 [2024-07-24 19:45:24.992429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.000779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.000797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.010270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.010287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.022721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.022739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.033206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.033223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.040698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.040716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.050243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.050261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.058139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.058156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.066778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.066795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.076533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.076551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.083575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.083593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.092074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.661 [2024-07-24 19:45:25.092091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.661 [2024-07-24 19:45:25.103248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.103265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.114371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.114388] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.122934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.122951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.130094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.130112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.137583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.137600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.148363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.148382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.157548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.157567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.166360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.166378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.174751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.174769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.183540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.183557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.192918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.192935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.201851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.201869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.211040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.211065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.217976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.217994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.228938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.228955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.237484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.237501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.245979] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.245997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.662 [2024-07-24 19:45:25.254631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.662 [2024-07-24 19:45:25.254648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.263788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.263807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.270668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.270684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.280670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.280687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.289551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.289568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.298584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.298601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.307597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.307614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.315050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.315068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.324965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.324982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.333407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.333424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.343137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.343154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.351791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.351808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.358728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.358745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.370356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.370374] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.379062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.379080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.387759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.387776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.394643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.394661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.406420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.406439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.415775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.415794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.424524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.424541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.433550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.433569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.443192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.443210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.452120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.452139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.460470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.460488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.468851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.468869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.477458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.477476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.486765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.486783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.495758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.495775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.504873] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.504891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.922 [2024-07-24 19:45:25.512613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.922 [2024-07-24 19:45:25.512631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.522228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.522246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.530756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.530774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.539449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.539467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.547850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.547877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.554700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.554717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.565665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.565685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.573692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.573711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.584519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.584536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.595654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.181 [2024-07-24 19:45:25.595673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.181 [2024-07-24 19:45:25.602667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.602685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.612480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.612497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.621500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.621517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.629912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.629934] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.638947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.638965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.647979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.647998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.656894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.656912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.665428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.665446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.672504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.672522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.682828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.682847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.691400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.691418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.700540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.700558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.709000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.709018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.717858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.717876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.726674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.726692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.735061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.735078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.744616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.744634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.753750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.753768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.762318] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.762336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.770868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.770886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.182 [2024-07-24 19:45:25.779080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.182 [2024-07-24 19:45:25.779098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.788016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.788034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.797137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.797159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.806257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.806275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.814999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.815017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.823680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.823698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.832190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.832208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.840458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.840474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.849485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.849502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.858210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.858227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.867166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.867183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.876219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.876237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.882926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.882943] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.893193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.893211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.901824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.901841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.910158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.910176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.918694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.918712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.928222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.928240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.937165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.937182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.945531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.945549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.953945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.953962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.962739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.962762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.972587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.972605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.981291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.981308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.990440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.990457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:25.999638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:25.999655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:26.008385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.471 [2024-07-24 19:45:26.008403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.471 [2024-07-24 19:45:26.016567] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.471 [2024-07-24 19:45:26.016585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.471 [... this two-line pair (subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace") repeats back-to-back roughly 200 times, from 19:45:26.016 through 19:45:27.916, with only the timestamps changing; the intervening repetitions are elided ...]
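What this error storm is: while the zcopy workload runs, the test repeatedly tries to re-add NSID 1 to nqn.2016-06.io.spdk:cnode1, and every attempt fails with this pair because the namespace is still attached (nvmf_rpc_ns_paused fires after the subsystem is paused for the add). A minimal sketch of such a churn loop, assuming scripts/rpc.py and a base bdev named malloc0; the $perf_pid variable is illustrative, and this is a plausible reconstruction, not the verbatim zcopy.sh:

  # Hammer cnode1 with duplicate-NSID adds while I/O is in flight; each call
  # fails on the target with "Requested NSID 1 already in use".
  # ($perf_pid is a hypothetical handle on the running I/O workload.)
  while kill -0 "$perf_pid" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done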
00:09:36.555
00:09:36.555 Latency(us)
00:09:36.555 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:36.555 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:36.555 Nvme1n1                     :       5.00   15659.86     122.34      0.00     0.00    8167.81    2236.77   50149.29
00:09:36.555 ===================================================================================================================
00:09:36.555 Total                       :              15659.86     122.34      0.00     0.00    8167.81    2236.77   50149.29
00:09:36.555 [... the NSID-conflict error pair resumes at 19:45:27.924 and repeats through 19:45:28.100, interleaved with the stats flush above; those repetitions are elided ...]
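The table is internally consistent: 15659.86 IOPS at the 8192-byte I/O size is exactly the reported 122.34 MiB/s, and with queue depth 128 Little's law predicts an average latency close to the reported 8167.81 us. A quick check:

  # Throughput and Little's-law latency recomputed from the table's own numbers
  awk 'BEGIN { printf "%.2f MiB/s, %.0f us\n", 15659.86 * 8192 / 1048576, 128 / 15659.86 * 1e6 }'
  # prints: 122.34 MiB/s, 8174 us  (the small latency gap is ramp-up/teardown noise)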
00:09:36.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1943182) - No such process
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1943182
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.555 delay0
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.555 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:36.814 EAL: No free 2048 kB hugepages reported on node 1
00:09:36.814 [2024-07-24 19:45:28.234355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:43.383 Initializing NVMe Controllers
00:09:43.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:43.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:43.383 Initialization complete. Launching workers.
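Condensed, the sequence just executed is: reap the killed churn loop, drop NSID 1, wrap malloc0 in a delay bdev so commands linger long enough to be aborted, re-export it as NSID 1, and aim the abort example at it over TCP. In plain rpc.py/CLI form (a sketch; in this harness rpc_cmd is a thin wrapper over scripts/rpc.py):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay0 adds 1,000,000 us of average/p99 latency to reads (-r/-t) and writes (-w/-n)
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'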
00:09:43.383 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76
00:09:43.383 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33
00:09:43.383 success 130, unsuccess 233, failed 0
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1941311 ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1941311
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1941311 ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1941311
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1941311
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1941311'
killing process with pid 1941311
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1941311
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1941311
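The abort counters above add up: 130 successes plus 233 unsuccessful aborts equals the 363 submitted, and 363 plus the 33 that failed to submit equals 396, matching the 320 completed plus 76 failed I/Os (one abort attempted per I/O). The nvmftestfini teardown in progress here boils down to roughly these commands (a sketch of its effective steps, not the script verbatim):

  sync
  modprobe -v -r nvme-tcp       # prints the rmmod lines above; pulls nvme_fabrics and nvme_keyring with it
  modprobe -v -r nvme-fabrics   # second pass is a no-op once everything is unloaded
  kill 1941311 && wait 1941311  # stop the nvmf_tgt reactor that served the test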
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:43.383 19:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:45.293
00:09:45.293 real 0m31.370s
00:09:45.293 user 0m42.800s
00:09:45.293 sys 0m10.519s
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:45.293 ************************************
00:09:45.293 END TEST nvmf_zcopy
00:09:45.293 ************************************
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:45.293 ************************************
00:09:45.293 START TEST nvmf_nmic
00:09:45.293 ************************************
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:45.293 * Looking for test storage...
00:09:45.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:45.293 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated toolchain prefixes and the standard system directories elided ...]
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... likewise elided ...]
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise elided ...]
00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.552 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.553 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:50.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:50.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:50.830 Found net devices under 0000:86:00.0: cvl_0_0 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:50.830 Found net devices under 0000:86:00.1: cvl_0_1 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.830 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.830 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:50.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:09:50.831 00:09:50.831 --- 10.0.0.2 ping statistics --- 00:09:50.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.831 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:09:50.831 00:09:50.831 --- 10.0.0.1 ping statistics --- 00:09:50.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.831 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1948547 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1948547 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1948547 ']' 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.831 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:50.831 [2024-07-24 19:45:42.158966] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
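For reference, the interface and namespace plumbing that nvmftestinit traced above reduces to the following sequence (a sketch reconstructed from the trace; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are the values detected on this runner, and the nvmf_tgt path is abbreviated):

# Move the target-side port into its own network namespace so initiator and
# target talk over a real TCP path between the two E810 ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # target reachable from the host
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # host reachable from the target ns
# The target app then starts inside the namespace: -i 0 is the shm id
# (matching 'spdk_trace -s nvmf -i 0' in the notices that follow), -e 0xFFFF
# enables all tracepoint groups, and -m 0xF runs four reactors on cores 0-3.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF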
00:09:50.831 [2024-07-24 19:45:42.159011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.831 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.831 [2024-07-24 19:45:42.216653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.831 [2024-07-24 19:45:42.298441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.831 [2024-07-24 19:45:42.298479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.831 [2024-07-24 19:45:42.298486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.831 [2024-07-24 19:45:42.298493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.831 [2024-07-24 19:45:42.298498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.831 [2024-07-24 19:45:42.298551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.831 [2024-07-24 19:45:42.298646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.831 [2024-07-24 19:45:42.298729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.831 [2024-07-24 19:45:42.298731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.400 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.400 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:51.400 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.400 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.400 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 [2024-07-24 19:45:43.013500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 Malloc0 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 [2024-07-24 19:45:43.065095] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:51.660 test case1: single bdev can't be used in multiple subsystems 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.660 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 [2024-07-24 19:45:43.088999] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:51.660 [2024-07-24 19:45:43.089019] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:51.661 [2024-07-24 19:45:43.089026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.661 request: 00:09:51.661 { 00:09:51.661 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:51.661 "namespace": { 
00:09:51.661 "bdev_name": "Malloc0", 00:09:51.661 "no_auto_visible": false 00:09:51.661 }, 00:09:51.661 "method": "nvmf_subsystem_add_ns", 00:09:51.661 "req_id": 1 00:09:51.661 } 00:09:51.661 Got JSON-RPC error response 00:09:51.661 response: 00:09:51.661 { 00:09:51.661 "code": -32602, 00:09:51.661 "message": "Invalid parameters" 00:09:51.661 } 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:51.661 Adding namespace failed - expected result. 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:51.661 test case2: host connect to nvmf target in multiple paths 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.661 [2024-07-24 19:45:43.101140] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.661 19:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.640 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:54.022 19:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.022 19:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.022 19:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.022 19:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:54.022 19:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:09:55.932 19:45:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:55.932 [global] 00:09:55.932 thread=1 00:09:55.932 invalidate=1 00:09:55.932 rw=write 00:09:55.932 time_based=1 00:09:55.932 runtime=1 00:09:55.932 ioengine=libaio 00:09:55.932 direct=1 00:09:55.932 bs=4096 00:09:55.932 iodepth=1 00:09:55.932 norandommap=0 00:09:55.932 numjobs=1 00:09:55.932 00:09:55.932 verify_dump=1 00:09:55.932 verify_backlog=512 00:09:55.932 verify_state_save=0 00:09:55.932 do_verify=1 00:09:55.932 verify=crc32c-intel 00:09:55.932 [job0] 00:09:55.932 filename=/dev/nvme0n1 00:09:55.932 Could not set queue depth (nvme0n1) 00:09:56.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.192 fio-3.35 00:09:56.192 Starting 1 thread 00:09:57.573 00:09:57.573 job0: (groupid=0, jobs=1): err= 0: pid=1949630: Wed Jul 24 19:45:48 2024 00:09:57.573 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:09:57.573 slat (nsec): min=10508, max=23823, avg=22198.70, stdev=2830.24 00:09:57.573 clat (usec): min=41337, max=42971, avg=42036.92, stdev=352.10 00:09:57.573 lat (usec): min=41347, max=42995, avg=42059.12, stdev=353.56 00:09:57.573 clat percentiles (usec): 00:09:57.573 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:57.573 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:57.573 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:09:57.573 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:57.573 | 99.99th=[42730] 00:09:57.573 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:09:57.573 slat (usec): min=10, max=26555, avg=64.23, stdev=1173.06 00:09:57.573 clat (usec): min=213, max=765, avg=272.98, stdev=104.69 00:09:57.573 lat (usec): min=225, max=27271, avg=337.21, stdev=1197.19 00:09:57.573 clat percentiles (usec): 00:09:57.573 | 1.00th=[ 217], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 221], 00:09:57.573 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:09:57.573 | 70.00th=[ 241], 80.00th=[ 285], 90.00th=[ 445], 95.00th=[ 586], 00:09:57.573 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 766], 00:09:57.573 | 99.99th=[ 766] 00:09:57.573 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.573 lat (usec) : 250=70.68%, 500=20.11%, 750=5.08%, 1000=0.38% 00:09:57.573 lat (msec) : 50=3.76% 00:09:57.573 cpu : usr=0.49%, sys=0.89%, ctx=536, majf=0, minf=2 00:09:57.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.573 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.573 00:09:57.573 Run status group 0 (all jobs): 00:09:57.573 READ: bw=78.7KiB/s (80.6kB/s), 78.7KiB/s-78.7KiB/s (80.6kB/s-80.6kB/s), io=80.0KiB (81.9kB), run=1016-1016msec 00:09:57.573 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:09:57.573 00:09:57.573 Disk stats (read/write): 00:09:57.573 nvme0n1: ios=42/512, merge=0/0, ticks=1684/132, 
in_queue=1816, util=98.70% 00:09:57.573 19:45:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:57.573 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.574 rmmod nvme_tcp 00:09:57.574 rmmod nvme_fabrics 00:09:57.574 rmmod nvme_keyring 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1948547 ']' 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1948547 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1948547 ']' 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1948547 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1948547 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1948547' 00:09:57.574 killing process with pid 1948547 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@969 -- # kill 1948547 00:09:57.574 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1948547 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.833 19:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.375 00:10:00.375 real 0m14.641s 00:10:00.375 user 0m35.044s 00:10:00.375 sys 0m4.574s 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.375 ************************************ 00:10:00.375 END TEST nvmf_nmic 00:10:00.375 ************************************ 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.375 ************************************ 00:10:00.375 START TEST nvmf_fio_target 00:10:00.375 ************************************ 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.375 * Looking for test storage... 
00:10:00.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.375 19:45:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.375 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.375 19:45:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.376 19:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:05.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:05.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.656 
19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:05.656 Found net devices under 0000:86:00.0: cvl_0_0 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:05.656 Found net devices under 0000:86:00.1: cvl_0_1 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.656 19:45:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.656 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:05.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:10:05.657 00:10:05.657 --- 10.0.0.2 ping statistics --- 00:10:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.657 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:10:05.657 00:10:05.657 --- 10.0.0.1 ping statistics --- 00:10:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.657 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1953380 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1953380 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1953380 ']' 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.657 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.657 [2024-07-24 19:45:56.993455] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
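Everything from common.sh@229 to the modprobe above is nvmf_tcp_init building a self-contained TCP rig out of the two back-to-back E810 ports: cvl_0_0 (10.0.0.2) is moved into a fresh network namespace to play the target, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, the NVMe/TCP default port 4420 is opened, and a ping in each direction proves the link. Condensed into a replayable sketch (interface and namespace names as in this log; substitute your own NICs):

NS=cvl_0_0_ns_spdk; TGT=cvl_0_0; INI=cvl_0_1
ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
ip netns add "$NS"
ip link set "$TGT" netns "$NS"                      # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI"                  # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # both directions

The nvmf_tgt application is then launched inside that namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation just above), so the target only ever sees traffic that crossed the physical link.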
00:10:05.657 [2024-07-24 19:45:56.993497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.657 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.657 [2024-07-24 19:45:57.051897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.657 [2024-07-24 19:45:57.132960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.657 [2024-07-24 19:45:57.132996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.657 [2024-07-24 19:45:57.133003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.657 [2024-07-24 19:45:57.133009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.657 [2024-07-24 19:45:57.133017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.657 [2024-07-24 19:45:57.133063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.657 [2024-07-24 19:45:57.133114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.657 [2024-07-24 19:45:57.133220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.657 [2024-07-24 19:45:57.133221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.225 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.225 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:06.225 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.225 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.225 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.485 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.485 19:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.485 [2024-07-24 19:45:57.990765] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.485 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.745 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:06.745 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.005 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:07.005 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.265 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:07.265 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.265 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:07.265 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:07.525 19:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.785 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:07.785 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.045 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:08.045 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.045 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:08.045 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:08.305 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.566 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:08.566 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.566 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:08.566 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:08.825 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.085 [2024-07-24 19:46:00.501099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.085 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:09.344 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:09.344 19:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.724 19:46:02 
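Stripped of the xtrace noise, target/fio.sh@17-46 is a short RPC recipe: create the TCP transport, carve seven 64 MiB malloc bdevs with 512-byte blocks, assemble two of them into raid0 and three into concat0, expose four bdevs as namespaces of a single subsystem, listen on the namespaced address, and connect from the initiator side. A condensed sketch, with $rpc standing in for the full scripts/rpc.py path used in the log:

rpc=./scripts/rpc.py                                  # adjust to your SPDK tree
$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # hostnqn/hostid flags as in the log

The four namespaces then surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, and the waitforserial loop below polls lsblk until all four report the SPDKISFASTANDAWESOME serial.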
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:10.724 19:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:10.724 19:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.724 19:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:10.724 19:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:10.724 19:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:12.632 19:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:12.632 [global] 00:10:12.632 thread=1 00:10:12.632 invalidate=1 00:10:12.632 rw=write 00:10:12.632 time_based=1 00:10:12.632 runtime=1 00:10:12.632 ioengine=libaio 00:10:12.632 direct=1 00:10:12.632 bs=4096 00:10:12.632 iodepth=1 00:10:12.632 norandommap=0 00:10:12.632 numjobs=1 00:10:12.632 00:10:12.632 verify_dump=1 00:10:12.632 verify_backlog=512 00:10:12.632 verify_state_save=0 00:10:12.632 do_verify=1 00:10:12.632 verify=crc32c-intel 00:10:12.632 [job0] 00:10:12.632 filename=/dev/nvme0n1 00:10:12.632 [job1] 00:10:12.632 filename=/dev/nvme0n2 00:10:12.632 [job2] 00:10:12.632 filename=/dev/nvme0n3 00:10:12.633 [job3] 00:10:12.633 filename=/dev/nvme0n4 00:10:12.633 Could not set queue depth (nvme0n1) 00:10:12.633 Could not set queue depth (nvme0n2) 00:10:12.633 Could not set queue depth (nvme0n3) 00:10:12.633 Could not set queue depth (nvme0n4) 00:10:12.892 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.892 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.892 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.892 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.892 fio-3.35 00:10:12.892 Starting 4 threads 00:10:14.273 00:10:14.273 job0: (groupid=0, jobs=1): err= 0: pid=1954761: Wed Jul 24 19:46:05 2024 00:10:14.273 read: IOPS=109, BW=436KiB/s (447kB/s)(444KiB/1018msec) 00:10:14.273 slat (nsec): min=7135, max=23649, avg=10158.99, stdev=5146.83 00:10:14.273 clat (usec): min=489, max=43003, avg=7408.61, stdev=15346.01 00:10:14.273 lat (usec): min=497, max=43026, avg=7418.77, stdev=15350.52 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 494], 5.00th=[ 502], 10.00th=[ 502], 20.00th=[ 506], 
00:10:14.273 | 30.00th=[ 519], 40.00th=[ 537], 50.00th=[ 627], 60.00th=[ 676], 00:10:14.273 | 70.00th=[ 734], 80.00th=[ 898], 90.00th=[41681], 95.00th=[42206], 00:10:14.273 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:14.273 | 99.99th=[43254] 00:10:14.273 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:14.273 slat (nsec): min=4417, max=65117, avg=12921.76, stdev=4186.89 00:10:14.273 clat (usec): min=223, max=950, avg=362.83, stdev=128.86 00:10:14.273 lat (usec): min=235, max=963, avg=375.76, stdev=129.49 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 249], 00:10:14.273 | 30.00th=[ 265], 40.00th=[ 293], 50.00th=[ 326], 60.00th=[ 355], 00:10:14.273 | 70.00th=[ 412], 80.00th=[ 465], 90.00th=[ 578], 95.00th=[ 603], 00:10:14.273 | 99.00th=[ 709], 99.50th=[ 758], 99.90th=[ 955], 99.95th=[ 955], 00:10:14.273 | 99.99th=[ 955] 00:10:14.273 bw ( KiB/s): min= 4096, max= 4096, per=28.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:14.273 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:14.273 lat (usec) : 250=16.69%, 500=52.49%, 750=26.32%, 1000=1.28% 00:10:14.273 lat (msec) : 2=0.16%, 10=0.16%, 50=2.89% 00:10:14.273 cpu : usr=0.29%, sys=0.98%, ctx=624, majf=0, minf=1 00:10:14.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.273 issued rwts: total=111,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.273 job1: (groupid=0, jobs=1): err= 0: pid=1954777: Wed Jul 24 19:46:05 2024 00:10:14.273 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:14.273 slat (nsec): min=7294, max=19175, avg=8256.06, stdev=757.48 00:10:14.273 clat (usec): min=322, max=1475, avg=538.01, stdev=73.57 00:10:14.273 lat (usec): min=330, max=1487, avg=546.27, stdev=73.67 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 343], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 506], 00:10:14.273 | 30.00th=[ 515], 40.00th=[ 519], 50.00th=[ 523], 60.00th=[ 529], 00:10:14.273 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 701], 00:10:14.273 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 922], 99.95th=[ 1483], 00:10:14.273 | 99.99th=[ 1483] 00:10:14.273 write: IOPS=1094, BW=4380KiB/s (4485kB/s)(4384KiB/1001msec); 0 zone resets 00:10:14.273 slat (usec): min=4, max=43395, avg=77.73, stdev=1564.00 00:10:14.273 clat (usec): min=217, max=7992, avg=317.12, stdev=290.48 00:10:14.273 lat (usec): min=229, max=44024, avg=394.85, stdev=1605.36 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 221], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:10:14.273 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 265], 00:10:14.273 | 70.00th=[ 310], 80.00th=[ 375], 90.00th=[ 486], 95.00th=[ 594], 00:10:14.273 | 99.00th=[ 783], 99.50th=[ 840], 99.90th=[ 3458], 99.95th=[ 7963], 00:10:14.273 | 99.99th=[ 7963] 00:10:14.273 bw ( KiB/s): min= 4096, max= 4096, per=28.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:14.273 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:14.273 lat (usec) : 250=28.11%, 500=26.37%, 750=44.48%, 1000=0.80% 00:10:14.273 lat (msec) : 2=0.09%, 4=0.09%, 10=0.05% 00:10:14.273 cpu : usr=1.90%, sys=3.40%, ctx=2125, majf=0, minf=2 00:10:14.273 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.273 issued rwts: total=1024,1096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.273 job2: (groupid=0, jobs=1): err= 0: pid=1954810: Wed Jul 24 19:46:05 2024 00:10:14.273 read: IOPS=507, BW=2029KiB/s (2078kB/s)(2084KiB/1027msec) 00:10:14.273 slat (nsec): min=6782, max=37222, avg=9414.72, stdev=5175.56 00:10:14.273 clat (usec): min=348, max=43041, avg=1289.60, stdev=5427.38 00:10:14.273 lat (usec): min=356, max=43064, avg=1299.01, stdev=5429.07 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 404], 5.00th=[ 494], 10.00th=[ 529], 20.00th=[ 537], 00:10:14.273 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:10:14.273 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 742], 95.00th=[ 766], 00:10:14.273 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:14.273 | 99.99th=[43254] 00:10:14.273 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:10:14.273 slat (usec): min=9, max=3963, avg=15.22, stdev=123.55 00:10:14.273 clat (usec): min=215, max=1513, avg=321.68, stdev=160.08 00:10:14.273 lat (usec): min=225, max=4356, avg=336.89, stdev=205.52 00:10:14.273 clat percentiles (usec): 00:10:14.273 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 237], 00:10:14.273 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 285], 00:10:14.273 | 70.00th=[ 314], 80.00th=[ 379], 90.00th=[ 469], 95.00th=[ 578], 00:10:14.273 | 99.00th=[ 1336], 99.50th=[ 1434], 99.90th=[ 1434], 99.95th=[ 1516], 00:10:14.273 | 99.99th=[ 1516] 00:10:14.273 bw ( KiB/s): min= 3472, max= 4720, per=28.76%, avg=4096.00, stdev=882.47, samples=2 00:10:14.274 iops : min= 868, max= 1180, avg=1024.00, stdev=220.62, samples=2 00:10:14.274 lat (usec) : 250=23.11%, 500=40.26%, 750=32.62%, 1000=2.72% 00:10:14.274 lat (msec) : 2=0.71%, 50=0.58% 00:10:14.274 cpu : usr=0.49%, sys=1.95%, ctx=1549, majf=0, minf=1 00:10:14.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.274 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.274 job3: (groupid=0, jobs=1): err= 0: pid=1954821: Wed Jul 24 19:46:05 2024 00:10:14.274 read: IOPS=672, BW=2689KiB/s (2754kB/s)(2692KiB/1001msec) 00:10:14.274 slat (nsec): min=7611, max=40734, avg=8803.36, stdev=2745.76 00:10:14.274 clat (usec): min=339, max=42054, avg=1029.03, stdev=4199.35 00:10:14.274 lat (usec): min=348, max=42078, avg=1037.84, stdev=4200.75 00:10:14.274 clat percentiles (usec): 00:10:14.274 | 1.00th=[ 412], 5.00th=[ 494], 10.00th=[ 523], 20.00th=[ 545], 00:10:14.274 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[ 586], 60.00th=[ 603], 00:10:14.274 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:10:14.274 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:14.274 | 99.99th=[42206] 00:10:14.274 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:14.274 slat (nsec): min=7005, max=42049, avg=12196.08, stdev=1945.83 00:10:14.274 clat (usec): min=219, 
max=2714, avg=277.13, stdev=111.77 00:10:14.274 lat (usec): min=230, max=2724, avg=289.32, stdev=111.97 00:10:14.274 clat percentiles (usec): 00:10:14.274 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:10:14.274 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:10:14.274 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 375], 95.00th=[ 469], 00:10:14.274 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 930], 99.95th=[ 2704], 00:10:14.274 | 99.99th=[ 2704] 00:10:14.274 bw ( KiB/s): min= 4096, max= 4096, per=28.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:14.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:14.274 lat (usec) : 250=35.89%, 500=25.10%, 750=37.48%, 1000=0.94% 00:10:14.274 lat (msec) : 2=0.12%, 4=0.06%, 50=0.41% 00:10:14.274 cpu : usr=1.90%, sys=2.40%, ctx=1700, majf=0, minf=1 00:10:14.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.274 issued rwts: total=673,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.274 00:10:14.274 Run status group 0 (all jobs): 00:10:14.274 READ: bw=9071KiB/s (9289kB/s), 436KiB/s-4092KiB/s (447kB/s-4190kB/s), io=9316KiB (9540kB), run=1001-1027msec 00:10:14.274 WRITE: bw=13.9MiB/s (14.6MB/s), 2012KiB/s-4380KiB/s (2060kB/s-4485kB/s), io=14.3MiB (15.0MB), run=1001-1027msec 00:10:14.274 00:10:14.274 Disk stats (read/write): 00:10:14.274 nvme0n1: ios=156/512, merge=0/0, ticks=693/185, in_queue=878, util=84.37% 00:10:14.274 nvme0n2: ios=770/1024, merge=0/0, ticks=746/313, in_queue=1059, util=90.73% 00:10:14.274 nvme0n3: ios=567/1024, merge=0/0, ticks=652/334, in_queue=986, util=91.00% 00:10:14.274 nvme0n4: ios=563/597, merge=0/0, ticks=817/179, in_queue=996, util=95.59% 00:10:14.274 19:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:14.274 [global] 00:10:14.274 thread=1 00:10:14.274 invalidate=1 00:10:14.274 rw=randwrite 00:10:14.274 time_based=1 00:10:14.274 runtime=1 00:10:14.274 ioengine=libaio 00:10:14.274 direct=1 00:10:14.274 bs=4096 00:10:14.274 iodepth=1 00:10:14.274 norandommap=0 00:10:14.274 numjobs=1 00:10:14.274 00:10:14.274 verify_dump=1 00:10:14.274 verify_backlog=512 00:10:14.274 verify_state_save=0 00:10:14.274 do_verify=1 00:10:14.274 verify=crc32c-intel 00:10:14.274 [job0] 00:10:14.274 filename=/dev/nvme0n1 00:10:14.274 [job1] 00:10:14.274 filename=/dev/nvme0n2 00:10:14.274 [job2] 00:10:14.274 filename=/dev/nvme0n3 00:10:14.274 [job3] 00:10:14.274 filename=/dev/nvme0n4 00:10:14.274 Could not set queue depth (nvme0n1) 00:10:14.274 Could not set queue depth (nvme0n2) 00:10:14.274 Could not set queue depth (nvme0n3) 00:10:14.274 Could not set queue depth (nvme0n4) 00:10:14.533 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.533 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.533 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.533 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.533 fio-3.35 00:10:14.533 
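The wrapper prints its generated job file in full above, so the workload is easy to reproduce outside the harness: one 4 KiB, queue-depth-1 libaio job per connected namespace, time-boxed to one second, with crc32c-intel data verification (the repeated "Could not set queue depth" lines are fio warnings, not failures; every job here still completes with err=0). A stand-alone equivalent, with /tmp/nvmf-fio.ini as an illustrative path:

# Recreate the randwrite job file the wrapper generated above and run it.
cat > /tmp/nvmf-fio.ini <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-fio.ini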
Starting 4 threads 00:10:15.945 00:10:15.945 job0: (groupid=0, jobs=1): err= 0: pid=1955261: Wed Jul 24 19:46:07 2024 00:10:15.945 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:15.945 slat (nsec): min=7110, max=40475, avg=8378.15, stdev=1669.32 00:10:15.945 clat (usec): min=363, max=29476, avg=579.01, stdev=1116.10 00:10:15.945 lat (usec): min=371, max=29484, avg=587.39, stdev=1116.12 00:10:15.945 clat percentiles (usec): 00:10:15.945 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 412], 20.00th=[ 465], 00:10:15.945 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 519], 60.00th=[ 529], 00:10:15.945 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 832], 00:10:15.945 | 99.00th=[ 1037], 99.50th=[ 1045], 99.90th=[21103], 99.95th=[29492], 00:10:15.945 | 99.99th=[29492] 00:10:15.945 write: IOPS=1090, BW=4364KiB/s (4468kB/s)(4368KiB/1001msec); 0 zone resets 00:10:15.945 slat (usec): min=10, max=5348, avg=17.12, stdev=161.51 00:10:15.945 clat (usec): min=221, max=1293, avg=338.39, stdev=120.21 00:10:15.945 lat (usec): min=234, max=5721, avg=355.50, stdev=202.42 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 277], 00:10:15.946 | 30.00th=[ 314], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 318], 00:10:15.946 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 523], 00:10:15.946 | 99.00th=[ 816], 99.50th=[ 1004], 99.90th=[ 1287], 99.95th=[ 1287], 00:10:15.946 | 99.99th=[ 1287] 00:10:15.946 bw ( KiB/s): min= 4656, max= 4656, per=30.12%, avg=4656.00, stdev= 0.00, samples=1 00:10:15.946 iops : min= 1164, max= 1164, avg=1164.00, stdev= 0.00, samples=1 00:10:15.946 lat (usec) : 250=6.33%, 500=58.08%, 750=31.05%, 1000=3.40% 00:10:15.946 lat (msec) : 2=1.04%, 50=0.09% 00:10:15.946 cpu : usr=2.20%, sys=3.10%, ctx=2119, majf=0, minf=1 00:10:15.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 issued rwts: total=1024,1092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.946 job1: (groupid=0, jobs=1): err= 0: pid=1955284: Wed Jul 24 19:46:07 2024 00:10:15.946 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:15.946 slat (nsec): min=8084, max=20415, avg=8953.31, stdev=941.08 00:10:15.946 clat (usec): min=364, max=42058, avg=1453.06, stdev=5950.82 00:10:15.946 lat (usec): min=372, max=42069, avg=1462.02, stdev=5951.18 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 424], 00:10:15.946 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 506], 60.00th=[ 515], 00:10:15.946 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 832], 95.00th=[ 1037], 00:10:15.946 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.946 | 99.99th=[42206] 00:10:15.946 write: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec); 0 zone resets 00:10:15.946 slat (nsec): min=10680, max=39996, avg=13364.12, stdev=2657.87 00:10:15.946 clat (usec): min=220, max=1675, avg=347.23, stdev=117.33 00:10:15.946 lat (usec): min=233, max=1698, avg=360.59, stdev=117.91 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 258], 00:10:15.946 | 30.00th=[ 277], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 343], 00:10:15.946 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 
486], 95.00th=[ 578], 00:10:15.946 | 99.00th=[ 709], 99.50th=[ 832], 99.90th=[ 1680], 99.95th=[ 1680], 00:10:15.946 | 99.99th=[ 1680] 00:10:15.946 bw ( KiB/s): min= 4096, max= 4096, per=26.49%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.946 lat (usec) : 250=8.16%, 500=60.19%, 750=26.60%, 1000=1.85% 00:10:15.946 lat (msec) : 2=2.10%, 10=0.08%, 20=0.08%, 50=0.93% 00:10:15.946 cpu : usr=1.20%, sys=2.00%, ctx=1189, majf=0, minf=2 00:10:15.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 issued rwts: total=512,676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.946 job2: (groupid=0, jobs=1): err= 0: pid=1955316: Wed Jul 24 19:46:07 2024 00:10:15.946 read: IOPS=514, BW=2057KiB/s (2106kB/s)(2104KiB/1023msec) 00:10:15.946 slat (nsec): min=6731, max=28677, avg=8443.50, stdev=2354.40 00:10:15.946 clat (usec): min=377, max=42066, avg=1144.51, stdev=5004.37 00:10:15.946 lat (usec): min=385, max=42089, avg=1152.95, stdev=5005.79 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 486], 00:10:15.946 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 515], 60.00th=[ 519], 00:10:15.946 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 676], 00:10:15.946 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.946 | 99.99th=[42206] 00:10:15.946 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:10:15.946 slat (usec): min=6, max=120, avg=12.82, stdev= 8.65 00:10:15.946 clat (usec): min=226, max=5427, avg=385.07, stdev=205.13 00:10:15.946 lat (usec): min=238, max=5468, avg=397.89, stdev=206.53 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 239], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 314], 00:10:15.946 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 318], 60.00th=[ 322], 00:10:15.946 | 70.00th=[ 371], 80.00th=[ 474], 90.00th=[ 578], 95.00th=[ 611], 00:10:15.946 | 99.00th=[ 816], 99.50th=[ 1106], 99.90th=[ 1287], 99.95th=[ 5407], 00:10:15.946 | 99.99th=[ 5407] 00:10:15.946 bw ( KiB/s): min= 4096, max= 4096, per=26.49%, avg=4096.00, stdev= 0.00, samples=2 00:10:15.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:15.946 lat (usec) : 250=1.61%, 500=62.71%, 750=32.97%, 1000=1.55% 00:10:15.946 lat (msec) : 2=0.58%, 10=0.06%, 50=0.52% 00:10:15.946 cpu : usr=1.37%, sys=1.96%, ctx=1552, majf=0, minf=1 00:10:15.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.946 job3: (groupid=0, jobs=1): err= 0: pid=1955327: Wed Jul 24 19:46:07 2024 00:10:15.946 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:15.946 slat (nsec): min=3717, max=12812, avg=5676.22, stdev=1201.88 00:10:15.946 clat (usec): min=416, max=41243, avg=642.99, stdev=1794.25 00:10:15.946 lat (usec): min=420, max=41250, avg=648.67, stdev=1794.39 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 
1.00th=[ 437], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 537], 00:10:15.946 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:10:15.946 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 693], 00:10:15.946 | 99.00th=[ 979], 99.50th=[ 988], 99.90th=[41157], 99.95th=[41157], 00:10:15.946 | 99.99th=[41157] 00:10:15.946 write: IOPS=1160, BW=4643KiB/s (4755kB/s)(4648KiB/1001msec); 0 zone resets 00:10:15.946 slat (nsec): min=4687, max=43542, avg=7656.33, stdev=2087.73 00:10:15.946 clat (usec): min=207, max=3427, avg=274.58, stdev=144.35 00:10:15.946 lat (usec): min=212, max=3434, avg=282.24, stdev=145.13 00:10:15.946 clat percentiles (usec): 00:10:15.946 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:10:15.946 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:10:15.946 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 347], 95.00th=[ 545], 00:10:15.946 | 99.00th=[ 603], 99.50th=[ 660], 99.90th=[ 2114], 99.95th=[ 3425], 00:10:15.946 | 99.99th=[ 3425] 00:10:15.946 bw ( KiB/s): min= 4096, max= 4096, per=26.49%, avg=4096.00, stdev= 0.00, samples=1 00:10:15.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:15.946 lat (usec) : 250=34.26%, 500=18.39%, 750=45.47%, 1000=1.60% 00:10:15.946 lat (msec) : 2=0.09%, 4=0.09%, 50=0.09% 00:10:15.946 cpu : usr=0.40%, sys=1.90%, ctx=2190, majf=0, minf=1 00:10:15.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.946 issued rwts: total=1024,1162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.946 00:10:15.946 Run status group 0 (all jobs): 00:10:15.946 READ: bw=11.8MiB/s (12.4MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=12.1MiB (12.6MB), run=1001-1023msec 00:10:15.946 WRITE: bw=15.1MiB/s (15.8MB/s), 2701KiB/s-4643KiB/s (2766kB/s-4755kB/s), io=15.4MiB (16.2MB), run=1001-1023msec 00:10:15.946 00:10:15.946 Disk stats (read/write): 00:10:15.946 nvme0n1: ios=780/1024, merge=0/0, ticks=910/332, in_queue=1242, util=96.69% 00:10:15.946 nvme0n2: ios=281/512, merge=0/0, ticks=1641/164, in_queue=1805, util=96.10% 00:10:15.946 nvme0n3: ios=540/1024, merge=0/0, ticks=1263/379, in_queue=1642, util=97.41% 00:10:15.946 nvme0n4: ios=792/1024, merge=0/0, ticks=716/280, in_queue=996, util=95.82% 00:10:15.946 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:15.946 [global] 00:10:15.946 thread=1 00:10:15.946 invalidate=1 00:10:15.946 rw=write 00:10:15.946 time_based=1 00:10:15.946 runtime=1 00:10:15.946 ioengine=libaio 00:10:15.946 direct=1 00:10:15.946 bs=4096 00:10:15.946 iodepth=128 00:10:15.946 norandommap=0 00:10:15.946 numjobs=1 00:10:15.946 00:10:15.946 verify_dump=1 00:10:15.946 verify_backlog=512 00:10:15.946 verify_state_save=0 00:10:15.946 do_verify=1 00:10:15.946 verify=crc32c-intel 00:10:15.946 [job0] 00:10:15.946 filename=/dev/nvme0n1 00:10:15.946 [job1] 00:10:15.946 filename=/dev/nvme0n2 00:10:15.946 [job2] 00:10:15.946 filename=/dev/nvme0n3 00:10:15.946 [job3] 00:10:15.946 filename=/dev/nvme0n4 00:10:15.946 Could not set queue depth (nvme0n1) 00:10:15.946 Could not set queue depth (nvme0n2) 00:10:15.946 Could not set queue depth (nvme0n3) 00:10:15.946 Could not set queue 
depth (nvme0n4) 00:10:16.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.229 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.229 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.229 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:16.229 fio-3.35 00:10:16.229 Starting 4 threads 00:10:17.631 00:10:17.631 job0: (groupid=0, jobs=1): err= 0: pid=1955716: Wed Jul 24 19:46:08 2024 00:10:17.631 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:17.631 slat (nsec): min=1110, max=19860k, avg=138184.87, stdev=956598.24 00:10:17.631 clat (msec): min=8, max=103, avg=20.22, stdev=14.22 00:10:17.631 lat (msec): min=8, max=108, avg=20.35, stdev=14.29 00:10:17.631 clat percentiles (msec): 00:10:17.631 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:10:17.631 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:10:17.631 | 70.00th=[ 20], 80.00th=[ 27], 90.00th=[ 41], 95.00th=[ 53], 00:10:17.631 | 99.00th=[ 77], 99.50th=[ 77], 99.90th=[ 96], 99.95th=[ 96], 00:10:17.631 | 99.99th=[ 104] 00:10:17.631 write: IOPS=3306, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1004msec); 0 zone resets 00:10:17.631 slat (nsec): min=1895, max=28186k, avg=169217.74, stdev=1029423.56 00:10:17.631 clat (usec): min=2972, max=70940, avg=19297.82, stdev=8670.16 00:10:17.631 lat (usec): min=8074, max=87879, avg=19467.04, stdev=8784.37 00:10:17.631 clat percentiles (usec): 00:10:17.631 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:10:17.631 | 30.00th=[12649], 40.00th=[15401], 50.00th=[17433], 60.00th=[19530], 00:10:17.631 | 70.00th=[21890], 80.00th=[24511], 90.00th=[32113], 95.00th=[38011], 00:10:17.631 | 99.00th=[51119], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:10:17.631 | 99.99th=[70779] 00:10:17.631 bw ( KiB/s): min=12288, max=13256, per=19.12%, avg=12772.00, stdev=684.48, samples=2 00:10:17.631 iops : min= 3072, max= 3314, avg=3193.00, stdev=171.12, samples=2 00:10:17.631 lat (msec) : 4=0.02%, 10=3.93%, 20=61.42%, 50=31.30%, 100=3.32% 00:10:17.631 lat (msec) : 250=0.02% 00:10:17.631 cpu : usr=2.49%, sys=2.29%, ctx=473, majf=0, minf=1 00:10:17.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:17.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.632 issued rwts: total=3072,3320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.632 job1: (groupid=0, jobs=1): err= 0: pid=1955717: Wed Jul 24 19:46:08 2024 00:10:17.632 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:10:17.632 slat (nsec): min=1125, max=20483k, avg=103177.37, stdev=793532.36 00:10:17.632 clat (usec): min=6091, max=43078, avg=13849.63, stdev=5954.05 00:10:17.632 lat (usec): min=6092, max=43103, avg=13952.80, stdev=5992.19 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 7111], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[ 9634], 00:10:17.632 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12387], 60.00th=[13698], 00:10:17.632 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20055], 95.00th=[25297], 00:10:17.632 | 99.00th=[38536], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:10:17.632 | 99.99th=[43254] 
00:10:17.632 write: IOPS=4746, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1007msec); 0 zone resets 00:10:17.632 slat (nsec): min=1908, max=15600k, avg=105933.91, stdev=572526.42 00:10:17.632 clat (usec): min=1616, max=28221, avg=13353.48, stdev=4745.46 00:10:17.632 lat (usec): min=1629, max=28234, avg=13459.41, stdev=4762.74 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 4359], 5.00th=[ 6063], 10.00th=[ 7701], 20.00th=[ 9110], 00:10:17.632 | 30.00th=[10552], 40.00th=[11731], 50.00th=[13173], 60.00th=[14484], 00:10:17.632 | 70.00th=[15795], 80.00th=[17171], 90.00th=[19268], 95.00th=[22414], 00:10:17.632 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26870], 99.95th=[27395], 00:10:17.632 | 99.99th=[28181] 00:10:17.632 bw ( KiB/s): min=16744, max=20480, per=27.87%, avg=18612.00, stdev=2641.75, samples=2 00:10:17.632 iops : min= 4186, max= 5120, avg=4653.00, stdev=660.44, samples=2 00:10:17.632 lat (msec) : 2=0.02%, 4=0.35%, 10=27.25%, 20=63.05%, 50=9.33% 00:10:17.632 cpu : usr=2.49%, sys=3.48%, ctx=611, majf=0, minf=1 00:10:17.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:17.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.632 issued rwts: total=4608,4780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.632 job2: (groupid=0, jobs=1): err= 0: pid=1955718: Wed Jul 24 19:46:08 2024 00:10:17.632 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:17.632 slat (nsec): min=1032, max=15288k, avg=81015.52, stdev=637405.51 00:10:17.632 clat (usec): min=3685, max=43758, avg=13197.84, stdev=3759.97 00:10:17.632 lat (usec): min=3695, max=59037, avg=13278.85, stdev=3811.57 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 5473], 5.00th=[ 7504], 10.00th=[ 8848], 20.00th=[10552], 00:10:17.632 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12911], 60.00th=[13173], 00:10:17.632 | 70.00th=[13960], 80.00th=[15795], 90.00th=[18220], 95.00th=[21103], 00:10:17.632 | 99.00th=[22938], 99.50th=[22938], 99.90th=[27132], 99.95th=[27132], 00:10:17.632 | 99.99th=[43779] 00:10:17.632 write: IOPS=4599, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:17.632 slat (nsec): min=1956, max=18500k, avg=104071.75, stdev=725610.53 00:10:17.632 clat (usec): min=1879, max=41924, avg=14441.36, stdev=6186.33 00:10:17.632 lat (usec): min=1887, max=41928, avg=14545.43, stdev=6210.82 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 3720], 5.00th=[ 6128], 10.00th=[ 7701], 20.00th=[ 9241], 00:10:17.632 | 30.00th=[10421], 40.00th=[12256], 50.00th=[13698], 60.00th=[15139], 00:10:17.632 | 70.00th=[16909], 80.00th=[18744], 90.00th=[22152], 95.00th=[25035], 00:10:17.632 | 99.00th=[31327], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:10:17.632 | 99.99th=[41681] 00:10:17.632 bw ( KiB/s): min=16384, max=20480, per=27.60%, avg=18432.00, stdev=2896.31, samples=2 00:10:17.632 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:17.632 lat (msec) : 2=0.03%, 4=0.83%, 10=21.15%, 20=66.43%, 50=11.55% 00:10:17.632 cpu : usr=2.69%, sys=3.99%, ctx=513, majf=0, minf=1 00:10:17.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:17.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.632 issued rwts: 
total=4608,4618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.632 job3: (groupid=0, jobs=1): err= 0: pid=1955719: Wed Jul 24 19:46:08 2024 00:10:17.632 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1006msec) 00:10:17.632 slat (nsec): min=1164, max=13465k, avg=91809.66, stdev=681711.02 00:10:17.632 clat (usec): min=1649, max=38265, avg=15002.97, stdev=5570.18 00:10:17.632 lat (usec): min=1663, max=38328, avg=15094.78, stdev=5622.24 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 3818], 5.00th=[ 6587], 10.00th=[ 9241], 20.00th=[11469], 00:10:17.632 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13698], 60.00th=[14746], 00:10:17.632 | 70.00th=[16188], 80.00th=[18482], 90.00th=[23200], 95.00th=[26346], 00:10:17.632 | 99.00th=[32637], 99.50th=[32900], 99.90th=[36439], 99.95th=[36439], 00:10:17.632 | 99.99th=[38011] 00:10:17.632 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:17.632 slat (usec): min=2, max=23197, avg=101.35, stdev=754.99 00:10:17.632 clat (usec): min=1002, max=61109, avg=16122.39, stdev=9110.09 00:10:17.632 lat (usec): min=1033, max=61114, avg=16223.73, stdev=9121.01 00:10:17.632 clat percentiles (usec): 00:10:17.632 | 1.00th=[ 4359], 5.00th=[ 6718], 10.00th=[ 7963], 20.00th=[ 9634], 00:10:17.632 | 30.00th=[10945], 40.00th=[12387], 50.00th=[13960], 60.00th=[16188], 00:10:17.632 | 70.00th=[17433], 80.00th=[20841], 90.00th=[25035], 95.00th=[35914], 00:10:17.632 | 99.00th=[57934], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:10:17.632 | 99.99th=[61080] 00:10:17.632 bw ( KiB/s): min=15512, max=17256, per=24.53%, avg=16384.00, stdev=1233.19, samples=2 00:10:17.632 iops : min= 3878, max= 4314, avg=4096.00, stdev=308.30, samples=2 00:10:17.632 lat (msec) : 2=0.11%, 4=0.88%, 10=16.09%, 20=64.43%, 50=17.71% 00:10:17.632 lat (msec) : 100=0.78% 00:10:17.632 cpu : usr=2.89%, sys=5.17%, ctx=596, majf=0, minf=1 00:10:17.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:17.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.632 issued rwts: total=4084,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.632 00:10:17.632 Run status group 0 (all jobs): 00:10:17.632 READ: bw=63.5MiB/s (66.6MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1004-1007msec 00:10:17.632 WRITE: bw=65.2MiB/s (68.4MB/s), 12.9MiB/s-18.5MiB/s (13.5MB/s-19.4MB/s), io=65.7MiB (68.9MB), run=1004-1007msec 00:10:17.632 00:10:17.632 Disk stats (read/write): 00:10:17.632 nvme0n1: ios=2741/3072, merge=0/0, ticks=17723/19511, in_queue=37234, util=84.45% 00:10:17.632 nvme0n2: ios=3842/4096, merge=0/0, ticks=43206/50192, in_queue=93398, util=87.13% 00:10:17.632 nvme0n3: ios=3608/4089, merge=0/0, ticks=47551/52845, in_queue=100396, util=92.70% 00:10:17.632 nvme0n4: ios=3197/3584, merge=0/0, ticks=41488/45992, in_queue=87480, util=93.93% 00:10:17.632 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:17.632 [global] 00:10:17.632 thread=1 00:10:17.632 invalidate=1 00:10:17.632 rw=randwrite 00:10:17.632 time_based=1 00:10:17.632 runtime=1 00:10:17.632 ioengine=libaio 00:10:17.632 direct=1 00:10:17.632 bs=4096 00:10:17.632 
iodepth=128 00:10:17.632 norandommap=0 00:10:17.632 numjobs=1 00:10:17.632 00:10:17.632 verify_dump=1 00:10:17.632 verify_backlog=512 00:10:17.632 verify_state_save=0 00:10:17.632 do_verify=1 00:10:17.632 verify=crc32c-intel 00:10:17.632 [job0] 00:10:17.632 filename=/dev/nvme0n1 00:10:17.632 [job1] 00:10:17.632 filename=/dev/nvme0n2 00:10:17.632 [job2] 00:10:17.632 filename=/dev/nvme0n3 00:10:17.632 [job3] 00:10:17.632 filename=/dev/nvme0n4 00:10:17.632 Could not set queue depth (nvme0n1) 00:10:17.632 Could not set queue depth (nvme0n2) 00:10:17.632 Could not set queue depth (nvme0n3) 00:10:17.632 Could not set queue depth (nvme0n4) 00:10:17.891 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.891 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.891 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.891 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.891 fio-3.35 00:10:17.891 Starting 4 threads 00:10:19.268 00:10:19.268 job0: (groupid=0, jobs=1): err= 0: pid=1956094: Wed Jul 24 19:46:10 2024 00:10:19.268 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:19.268 slat (nsec): min=1102, max=52127k, avg=133241.86, stdev=1324528.58 00:10:19.268 clat (msec): min=6, max=110, avg=16.71, stdev=13.71 00:10:19.268 lat (msec): min=6, max=110, avg=16.84, stdev=13.80 00:10:19.268 clat percentiles (msec): 00:10:19.268 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:10:19.268 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:10:19.268 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 23], 95.00th=[ 31], 00:10:19.268 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:10:19.268 | 99.99th=[ 111] 00:10:19.268 write: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(15.7MiB/1009msec); 0 zone resets 00:10:19.268 slat (nsec): min=1886, max=8836.1k, avg=125936.03, stdev=511643.53 00:10:19.268 clat (msec): min=6, max=101, avg=16.69, stdev= 7.46 00:10:19.268 lat (msec): min=7, max=101, avg=16.82, stdev= 7.48 00:10:19.268 clat percentiles (msec): 00:10:19.268 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:10:19.268 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:10:19.268 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 23], 95.00th=[ 26], 00:10:19.268 | 99.00th=[ 52], 99.50th=[ 55], 99.90th=[ 56], 99.95th=[ 102], 00:10:19.268 | 99.99th=[ 102] 00:10:19.268 bw ( KiB/s): min=13128, max=17984, per=25.77%, avg=15556.00, stdev=3433.71, samples=2 00:10:19.268 iops : min= 3282, max= 4496, avg=3889.00, stdev=858.43, samples=2 00:10:19.268 lat (msec) : 10=17.42%, 20=57.96%, 50=22.26%, 100=1.79%, 250=0.57% 00:10:19.268 cpu : usr=2.08%, sys=2.58%, ctx=702, majf=0, minf=1 00:10:19.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.268 issued rwts: total=3584,4016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.268 job1: (groupid=0, jobs=1): err= 0: pid=1956095: Wed Jul 24 19:46:10 2024 00:10:19.268 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:10:19.268 slat (nsec): min=1068, max=18534k, avg=105684.76, 
stdev=622582.79 00:10:19.268 clat (usec): min=7934, max=31425, avg=13844.02, stdev=4468.41 00:10:19.268 lat (usec): min=8151, max=33099, avg=13949.71, stdev=4497.04 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10290], 00:10:19.268 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12387], 60.00th=[13304], 00:10:19.268 | 70.00th=[15139], 80.00th=[17433], 90.00th=[19792], 95.00th=[21365], 00:10:19.268 | 99.00th=[29492], 99.50th=[29754], 99.90th=[31327], 99.95th=[31327], 00:10:19.268 | 99.99th=[31327] 00:10:19.268 write: IOPS=4528, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1009msec); 0 zone resets 00:10:19.268 slat (nsec): min=1875, max=10242k, avg=120223.59, stdev=565012.50 00:10:19.268 clat (usec): min=6332, max=33321, avg=15280.73, stdev=5860.26 00:10:19.268 lat (usec): min=6337, max=33325, avg=15400.95, stdev=5900.40 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:19.268 | 30.00th=[10814], 40.00th=[12256], 50.00th=[13566], 60.00th=[15664], 00:10:19.268 | 70.00th=[17433], 80.00th=[20317], 90.00th=[24511], 95.00th=[27919], 00:10:19.268 | 99.00th=[30278], 99.50th=[31065], 99.90th=[32375], 99.95th=[32375], 00:10:19.268 | 99.99th=[33424] 00:10:19.268 bw ( KiB/s): min=15680, max=19848, per=29.43%, avg=17764.00, stdev=2947.22, samples=2 00:10:19.268 iops : min= 3920, max= 4962, avg=4441.00, stdev=736.81, samples=2 00:10:19.268 lat (msec) : 10=18.28%, 20=66.95%, 50=14.77% 00:10:19.268 cpu : usr=2.58%, sys=2.38%, ctx=684, majf=0, minf=1 00:10:19.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.268 issued rwts: total=4096,4569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.268 job2: (groupid=0, jobs=1): err= 0: pid=1956096: Wed Jul 24 19:46:10 2024 00:10:19.268 read: IOPS=3495, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1010msec) 00:10:19.268 slat (nsec): min=1221, max=19914k, avg=144159.45, stdev=926755.49 00:10:19.268 clat (usec): min=4236, max=44678, avg=18763.12, stdev=8407.10 00:10:19.268 lat (usec): min=7093, max=45184, avg=18907.27, stdev=8472.24 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 7177], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11600], 00:10:19.268 | 30.00th=[12911], 40.00th=[13698], 50.00th=[15270], 60.00th=[18220], 00:10:19.268 | 70.00th=[22938], 80.00th=[29230], 90.00th=[31589], 95.00th=[33817], 00:10:19.268 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42730], 99.95th=[44303], 00:10:19.268 | 99.99th=[44827] 00:10:19.268 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:10:19.268 slat (usec): min=2, max=11962, avg=131.46, stdev=695.48 00:10:19.268 clat (usec): min=4908, max=44378, avg=17111.22, stdev=7687.68 00:10:19.268 lat (usec): min=4975, max=44382, avg=17242.68, stdev=7728.82 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11207], 00:10:19.268 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14484], 60.00th=[16319], 00:10:19.268 | 70.00th=[19530], 80.00th=[21890], 90.00th=[27657], 95.00th=[34866], 00:10:19.268 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:10:19.268 | 99.99th=[44303] 00:10:19.268 bw ( KiB/s): min=12288, max=16384, per=23.75%, avg=14336.00, 
stdev=2896.31, samples=2 00:10:19.268 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:19.268 lat (msec) : 10=10.26%, 20=57.60%, 50=32.13% 00:10:19.268 cpu : usr=2.38%, sys=3.37%, ctx=419, majf=0, minf=1 00:10:19.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.268 issued rwts: total=3530,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.268 job3: (groupid=0, jobs=1): err= 0: pid=1956097: Wed Jul 24 19:46:10 2024 00:10:19.268 read: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:10:19.268 slat (nsec): min=1103, max=22448k, avg=190753.23, stdev=1306563.26 00:10:19.268 clat (usec): min=471, max=67145, avg=23726.40, stdev=12322.16 00:10:19.268 lat (usec): min=1818, max=67151, avg=23917.16, stdev=12433.25 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 4047], 5.00th=[10683], 10.00th=[12649], 20.00th=[13960], 00:10:19.268 | 30.00th=[15270], 40.00th=[17695], 50.00th=[20317], 60.00th=[22676], 00:10:19.268 | 70.00th=[26084], 80.00th=[32900], 90.00th=[44827], 95.00th=[51643], 00:10:19.268 | 99.00th=[56361], 99.50th=[56361], 99.90th=[63701], 99.95th=[64226], 00:10:19.268 | 99.99th=[67634] 00:10:19.268 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:19.268 slat (nsec): min=1788, max=12691k, avg=155659.82, stdev=851682.46 00:10:19.268 clat (usec): min=8272, max=53919, avg=20385.42, stdev=6969.60 00:10:19.268 lat (usec): min=8281, max=53932, avg=20541.08, stdev=7002.24 00:10:19.268 clat percentiles (usec): 00:10:19.268 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[12256], 20.00th=[14091], 00:10:19.268 | 30.00th=[16188], 40.00th=[18220], 50.00th=[19792], 60.00th=[21365], 00:10:19.268 | 70.00th=[22676], 80.00th=[25297], 90.00th=[29230], 95.00th=[31851], 00:10:19.269 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:10:19.269 | 99.99th=[53740] 00:10:19.269 bw ( KiB/s): min=10952, max=10952, per=18.14%, avg=10952.00, stdev= 0.00, samples=1 00:10:19.269 iops : min= 2738, max= 2738, avg=2738.00, stdev= 0.00, samples=1 00:10:19.269 lat (usec) : 500=0.02% 00:10:19.269 lat (msec) : 2=0.09%, 4=0.28%, 10=3.23%, 20=46.83%, 50=46.72% 00:10:19.269 lat (msec) : 100=2.83% 00:10:19.269 cpu : usr=1.80%, sys=2.70%, ctx=389, majf=0, minf=1 00:10:19.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.269 issued rwts: total=2647,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.269 00:10:19.269 Run status group 0 (all jobs): 00:10:19.269 READ: bw=53.6MiB/s (56.2MB/s), 10.3MiB/s-15.9MiB/s (10.8MB/s-16.6MB/s), io=54.1MiB (56.8MB), run=1001-1010msec 00:10:19.269 WRITE: bw=58.9MiB/s (61.8MB/s), 12.0MiB/s-17.7MiB/s (12.6MB/s-18.5MB/s), io=59.5MiB (62.4MB), run=1001-1010msec 00:10:19.269 00:10:19.269 Disk stats (read/write): 00:10:19.269 nvme0n1: ios=3114/3182, merge=0/0, ticks=24501/23515, in_queue=48016, util=88.68% 00:10:19.269 nvme0n2: ios=3606/3961, merge=0/0, ticks=17732/18023, in_queue=35755, util=98.78% 00:10:19.269 nvme0n3: ios=3093/3367, merge=0/0, ticks=44483/43612, 
in_queue=88095, util=93.56% 00:10:19.269 nvme0n4: ios=2232/2560, merge=0/0, ticks=22374/18135, in_queue=40509, util=98.43% 00:10:19.269 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:19.269 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1956323 00:10:19.269 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:19.269 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:19.269 [global] 00:10:19.269 thread=1 00:10:19.269 invalidate=1 00:10:19.269 rw=read 00:10:19.269 time_based=1 00:10:19.269 runtime=10 00:10:19.269 ioengine=libaio 00:10:19.269 direct=1 00:10:19.269 bs=4096 00:10:19.269 iodepth=1 00:10:19.269 norandommap=1 00:10:19.269 numjobs=1 00:10:19.269 00:10:19.269 [job0] 00:10:19.269 filename=/dev/nvme0n1 00:10:19.269 [job1] 00:10:19.269 filename=/dev/nvme0n2 00:10:19.269 [job2] 00:10:19.269 filename=/dev/nvme0n3 00:10:19.269 [job3] 00:10:19.269 filename=/dev/nvme0n4 00:10:19.269 Could not set queue depth (nvme0n1) 00:10:19.269 Could not set queue depth (nvme0n2) 00:10:19.269 Could not set queue depth (nvme0n3) 00:10:19.269 Could not set queue depth (nvme0n4) 00:10:19.269 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.269 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.269 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.269 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.269 fio-3.35 00:10:19.269 Starting 4 threads 00:10:22.557 19:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:22.557 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3325952, buflen=4096 00:10:22.558 fio: pid=1956469, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.558 19:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:22.558 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=438272, buflen=4096 00:10:22.558 fio: pid=1956467, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.558 19:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.558 19:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:22.558 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12759040, buflen=4096 00:10:22.558 fio: pid=1956465, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.558 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.558 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:22.818 fio: io_u error on file /dev/nvme0n2: Remote I/O error: 
read offset=22786048, buflen=4096 00:10:22.818 fio: pid=1956466, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.818 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.818 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:22.818 00:10:22.818 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1956465: Wed Jul 24 19:46:14 2024 00:10:22.818 read: IOPS=1021, BW=4084KiB/s (4182kB/s)(12.2MiB/3051msec) 00:10:22.818 slat (usec): min=6, max=16891, avg=25.87, stdev=511.78 00:10:22.818 clat (usec): min=357, max=42946, avg=951.22, stdev=4427.74 00:10:22.818 lat (usec): min=364, max=42967, avg=977.10, stdev=4458.09 00:10:22.818 clat percentiles (usec): 00:10:22.818 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:10:22.818 | 30.00th=[ 396], 40.00th=[ 416], 50.00th=[ 453], 60.00th=[ 461], 00:10:22.818 | 70.00th=[ 469], 80.00th=[ 510], 90.00th=[ 578], 95.00th=[ 799], 00:10:22.818 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:22.818 | 99.99th=[42730] 00:10:22.818 bw ( KiB/s): min= 96, max= 8432, per=28.60%, avg=3372.40, stdev=3493.17, samples=5 00:10:22.818 iops : min= 24, max= 2108, avg=843.00, stdev=873.25, samples=5 00:10:22.818 lat (usec) : 500=75.74%, 750=18.97%, 1000=3.05% 00:10:22.818 lat (msec) : 2=0.96%, 4=0.03%, 10=0.03%, 20=0.03%, 50=1.16% 00:10:22.818 cpu : usr=0.56%, sys=1.64%, ctx=3121, majf=0, minf=1 00:10:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 issued rwts: total=3116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.818 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1956466: Wed Jul 24 19:46:14 2024 00:10:22.818 read: IOPS=1708, BW=6834KiB/s (6998kB/s)(21.7MiB/3256msec) 00:10:22.818 slat (usec): min=2, max=15353, avg=20.40, stdev=419.94 00:10:22.818 clat (usec): min=338, max=50478, avg=562.85, stdev=1306.77 00:10:22.818 lat (usec): min=346, max=50503, avg=581.66, stdev=1369.50 00:10:22.818 clat percentiles (usec): 00:10:22.818 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 392], 00:10:22.818 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 515], 00:10:22.818 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 725], 95.00th=[ 930], 00:10:22.818 | 99.00th=[ 1074], 99.50th=[ 1205], 99.90th=[ 4015], 99.95th=[42206], 00:10:22.818 | 99.99th=[50594] 00:10:22.818 bw ( KiB/s): min= 5976, max= 8592, per=60.89%, avg=7179.83, stdev=883.56, samples=6 00:10:22.818 iops : min= 1494, max= 2148, avg=1794.83, stdev=220.94, samples=6 00:10:22.818 lat (usec) : 500=51.71%, 750=39.38%, 1000=6.02% 00:10:22.818 lat (msec) : 2=2.75%, 4=0.02%, 10=0.02%, 50=0.07%, 100=0.02% 00:10:22.818 cpu : usr=1.01%, sys=2.64%, ctx=5570, majf=0, minf=1 00:10:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 issued 
rwts: total=5564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.818 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1956467: Wed Jul 24 19:46:14 2024 00:10:22.818 read: IOPS=37, BW=148KiB/s (152kB/s)(428KiB/2887msec) 00:10:22.818 slat (nsec): min=8140, max=30691, avg=16365.81, stdev=6204.79 00:10:22.818 clat (usec): min=621, max=43089, avg=26951.13, stdev=19934.82 00:10:22.818 lat (usec): min=630, max=43097, avg=26967.44, stdev=19938.48 00:10:22.818 clat percentiles (usec): 00:10:22.818 | 1.00th=[ 619], 5.00th=[ 644], 10.00th=[ 693], 20.00th=[ 709], 00:10:22.818 | 30.00th=[ 857], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:10:22.818 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:22.818 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:22.818 | 99.99th=[43254] 00:10:22.818 bw ( KiB/s): min= 88, max= 376, per=1.31%, avg=155.00, stdev=123.84, samples=5 00:10:22.818 iops : min= 22, max= 94, avg=38.60, stdev=31.03, samples=5 00:10:22.818 lat (usec) : 750=26.85%, 1000=6.48% 00:10:22.818 lat (msec) : 2=2.78%, 50=62.96% 00:10:22.818 cpu : usr=0.00%, sys=0.10%, ctx=108, majf=0, minf=1 00:10:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.818 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1956469: Wed Jul 24 19:46:14 2024 00:10:22.818 read: IOPS=301, BW=1206KiB/s (1235kB/s)(3248KiB/2693msec) 00:10:22.818 slat (nsec): min=7745, max=29858, avg=9753.60, stdev=3662.95 00:10:22.818 clat (usec): min=423, max=43019, avg=3303.02, stdev=10226.58 00:10:22.818 lat (usec): min=432, max=43043, avg=3312.76, stdev=10229.89 00:10:22.818 clat percentiles (usec): 00:10:22.818 | 1.00th=[ 437], 5.00th=[ 461], 10.00th=[ 537], 20.00th=[ 545], 00:10:22.818 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:10:22.818 | 70.00th=[ 594], 80.00th=[ 685], 90.00th=[ 971], 95.00th=[41681], 00:10:22.818 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:10:22.818 | 99.99th=[43254] 00:10:22.818 bw ( KiB/s): min= 96, max= 6059, per=10.92%, avg=1288.60, stdev=2666.73, samples=5 00:10:22.818 iops : min= 24, max= 1514, avg=322.00, stdev=666.35, samples=5 00:10:22.818 lat (usec) : 500=7.01%, 750=76.88%, 1000=6.27% 00:10:22.818 lat (msec) : 2=3.20%, 50=6.52% 00:10:22.818 cpu : usr=0.11%, sys=0.63%, ctx=814, majf=0, minf=2 00:10:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.818 issued rwts: total=813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.818 00:10:22.818 Run status group 0 (all jobs): 00:10:22.818 READ: bw=11.5MiB/s (12.1MB/s), 148KiB/s-6834KiB/s (152kB/s-6998kB/s), io=37.5MiB (39.3MB), run=2693-3256msec 00:10:22.818 00:10:22.818 Disk stats (read/write): 00:10:22.818 nvme0n1: ios=2918/0, merge=0/0, ticks=2800/0, in_queue=2800, 
util=95.36% 00:10:22.818 nvme0n2: ios=5597/0, merge=0/0, ticks=3890/0, in_queue=3890, util=98.33% 00:10:22.818 nvme0n3: ios=106/0, merge=0/0, ticks=2844/0, in_queue=2844, util=96.52% 00:10:22.818 nvme0n4: ios=850/0, merge=0/0, ticks=3416/0, in_queue=3416, util=99.04% 00:10:23.078 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.078 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:23.078 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.078 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:23.337 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.337 19:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:23.597 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.597 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1956323 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:23.857 nvmf hotplug test: fio failed as expected 00:10:23.857 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.117 
19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.117 rmmod nvme_tcp 00:10:24.117 rmmod nvme_fabrics 00:10:24.117 rmmod nvme_keyring 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1953380 ']' 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1953380 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1953380 ']' 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1953380 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953380 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953380' 00:10:24.117 killing process with pid 1953380 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1953380 00:10:24.117 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1953380 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.377 19:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.287 19:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.547 00:10:26.547 real 0m26.377s 00:10:26.547 user 1m47.102s 00:10:26.547 sys 0m7.588s 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.547 ************************************ 00:10:26.547 END TEST nvmf_fio_target 00:10:26.547 ************************************ 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.547 ************************************ 00:10:26.547 START TEST nvmf_bdevio 00:10:26.547 ************************************ 00:10:26.547 19:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:26.547 * Looking for test storage... 
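[Annotation] Each stage in this log is driven by run_test, which times the target script and brackets it with the START/END banners and real/user/sys accounting seen above. To reproduce just this stage outside the CI wrapper, the equivalent manual invocation is the one traced above (a sketch only; the path is this job's workspace layout, and root privileges are assumed since the script loads kernel modules and creates network namespaces):

  # re-run only the bdevio stage from a built SPDK tree
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/bdevio.sh --transport=tcp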
00:10:26.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.547 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.548 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.840 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.841 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.841 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.841 19:46:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:31.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:10:31.841 00:10:31.841 --- 10.0.0.2 ping statistics --- 00:10:31.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.841 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:10:31.841 00:10:31.841 --- 10.0.0.1 ping statistics --- 00:10:31.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.841 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1960699 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1960699 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1960699 ']' 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.841 19:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:31.842 [2024-07-24 19:46:23.384826] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:10:31.842 [2024-07-24 19:46:23.384869] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.842 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.100 [2024-07-24 19:46:23.441646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.100 [2024-07-24 19:46:23.522126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.100 [2024-07-24 19:46:23.522162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.100 [2024-07-24 19:46:23.522170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.100 [2024-07-24 19:46:23.522176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.100 [2024-07-24 19:46:23.522181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.100 [2024-07-24 19:46:23.522309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:32.100 [2024-07-24 19:46:23.522426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:32.100 [2024-07-24 19:46:23.522532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.100 [2024-07-24 19:46:23.522534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.666 [2024-07-24 19:46:24.226480] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.666 Malloc0 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.666 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.924 [2024-07-24 19:46:24.269855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:32.924 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:32.925 { 00:10:32.925 "params": { 00:10:32.925 "name": "Nvme$subsystem", 00:10:32.925 "trtype": "$TEST_TRANSPORT", 00:10:32.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.925 "adrfam": "ipv4", 00:10:32.925 "trsvcid": "$NVMF_PORT", 00:10:32.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.925 "hdgst": ${hdgst:-false}, 00:10:32.925 "ddgst": ${ddgst:-false} 00:10:32.925 }, 00:10:32.925 "method": "bdev_nvme_attach_controller" 00:10:32.925 } 00:10:32.925 EOF 00:10:32.925 )") 00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
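[Annotation] The bdevio target in this stage is stood up entirely over SPDK's JSON-RPC interface: a TCP transport, one 64 MiB malloc bdev, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. Condensed from the rpc_cmd traces above into a standalone sketch (the rpc.py path is shortened for readability; the NQN, serial, address, and port are the values used in this run, and a running nvmf_tgt is assumed):

  # stand up the same bdevio target by hand (sketch)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json heredoc traced here then emits the matching initiator-side bdev_nvme_attach_controller config, which bdevio consumes via --json /dev/fd/62, as the printed JSON below shows.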
00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:32.925 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:32.925 "params": { 00:10:32.925 "name": "Nvme1", 00:10:32.925 "trtype": "tcp", 00:10:32.925 "traddr": "10.0.0.2", 00:10:32.925 "adrfam": "ipv4", 00:10:32.925 "trsvcid": "4420", 00:10:32.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.925 "hdgst": false, 00:10:32.925 "ddgst": false 00:10:32.925 }, 00:10:32.925 "method": "bdev_nvme_attach_controller" 00:10:32.925 }' 00:10:32.925 [2024-07-24 19:46:24.316663] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:10:32.925 [2024-07-24 19:46:24.316707] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960950 ] 00:10:32.925 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.925 [2024-07-24 19:46:24.371761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.925 [2024-07-24 19:46:24.447142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.925 [2024-07-24 19:46:24.447237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.925 [2024-07-24 19:46:24.447238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.182 I/O targets: 00:10:33.182 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:33.182 00:10:33.182 00:10:33.182 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.182 http://cunit.sourceforge.net/ 00:10:33.182 00:10:33.182 00:10:33.182 Suite: bdevio tests on: Nvme1n1 00:10:33.182 Test: blockdev write read block ...passed 00:10:33.182 Test: blockdev write zeroes read block ...passed 00:10:33.182 Test: blockdev write zeroes read no split ...passed 00:10:33.182 Test: blockdev write zeroes read split ...passed 00:10:33.439 Test: blockdev write zeroes read split partial ...passed 00:10:33.440 Test: blockdev reset ...[2024-07-24 19:46:24.837988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:33.440 [2024-07-24 19:46:24.838052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f6d0 (9): Bad file descriptor 00:10:33.440 [2024-07-24 19:46:24.852262] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:33.440 passed 00:10:33.440 Test: blockdev write read 8 blocks ...passed 00:10:33.440 Test: blockdev write read size > 128k ...passed 00:10:33.440 Test: blockdev write read invalid size ...passed 00:10:33.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.440 Test: blockdev write read max offset ...passed 00:10:33.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.440 Test: blockdev writev readv 8 blocks ...passed 00:10:33.440 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.698 Test: blockdev writev readv block ...passed 00:10:33.698 Test: blockdev writev readv size > 128k ...passed 00:10:33.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.698 Test: blockdev comparev and writev ...[2024-07-24 19:46:25.086329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.086357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.086371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.086378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.086833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.086844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.086856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.086863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.087301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.087312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.087324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.087331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.087781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.087792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.087807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.698 [2024-07-24 19:46:25.087814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:33.698 passed 00:10:33.698 Test: blockdev nvme passthru rw ...passed 00:10:33.698 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:46:25.171940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.698 [2024-07-24 19:46:25.171955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.172308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.698 [2024-07-24 19:46:25.172319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.172672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.698 [2024-07-24 19:46:25.172682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:33.698 [2024-07-24 19:46:25.173036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.698 [2024-07-24 19:46:25.173050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:33.698 passed 00:10:33.698 Test: blockdev nvme admin passthru ...passed 00:10:33.698 Test: blockdev copy ...passed 00:10:33.698 00:10:33.698 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.698 suites 1 1 n/a 0 0 00:10:33.698 tests 23 23 23 0 0 00:10:33.698 asserts 152 152 152 0 n/a 00:10:33.698 00:10:33.698 Elapsed time = 1.175 seconds 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.957 rmmod nvme_tcp 00:10:33.957 rmmod nvme_fabrics 00:10:33.957 rmmod nvme_keyring 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
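[Annotation] nvmftestfini's cleanup path unloads the initiator-side kernel modules first (the rmmod lines above), then kills the nvmf_tgt application by pid and flushes the test interface, as the records below show. A simplified standalone sketch of the same teardown (pid 1960699 and interface cvl_0_1 are this run's values; the real killprocess helper in autotest_common.sh performs extra liveness and process-name checks not shown here):

  # teardown sketch; requires root
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 1960699                                            # stop the nvmf_tgt reactor
  while kill -0 1960699 2>/dev/null; do sleep 0.5; done   # wait for it to exit
  ip -4 addr flush cvl_0_1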
00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1960699 ']' 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1960699 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1960699 ']' 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1960699 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1960699 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1960699' 00:10:33.957 killing process with pid 1960699 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1960699 00:10:33.957 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1960699 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.217 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.756 00:10:36.756 real 0m9.804s 00:10:36.756 user 0m12.251s 00:10:36.756 sys 0m4.488s 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.756 ************************************ 00:10:36.756 END TEST nvmf_bdevio 00:10:36.756 ************************************ 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.756 00:10:36.756 real 4m32.469s 00:10:36.756 user 10m30.035s 00:10:36.756 sys 1m29.933s 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.756 ************************************ 00:10:36.756 END TEST nvmf_target_core 00:10:36.756 ************************************ 00:10:36.756 19:46:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.756 19:46:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.756 19:46:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.756 19:46:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.756 ************************************ 00:10:36.756 START TEST nvmf_target_extra 00:10:36.756 ************************************ 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.756 * Looking for test storage... 00:10:36.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
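Every suite in this job is dispatched the same way as the nvmf_example call above: run_test takes a test name plus the command to execute, prints the START TEST banner, runs the command under xtrace (the '-- #'-prefixed lines), and closes with the END TEST banner and the real/user/sys timings seen at the end of nvmf_bdevio. A hedged sketch of that dispatch pattern, paraphrased from the banners in this log rather than copied from autotest_common.sh:

    # Illustrative reconstruction of the wrapper, not the actual SPDK source.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"             # e.g. test/nvmf/target/nvmf_example.sh --transport=tcp
        echo "END TEST $name"
    }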
00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.756 19:46:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.756 ************************************ 00:10:36.756 START TEST nvmf_example 00:10:36.756 ************************************ 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.756 * Looking for test storage... 00:10:36.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.756 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.756 19:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.757 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:42.074 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:42.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:42.074 Found net devices under 0000:86:00.0: cvl_0_0 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.074 19:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:42.074 Found net devices under 0000:86:00.1: cvl_0_1 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.074 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:10:42.075 00:10:42.075 --- 10.0.0.2 ping statistics --- 00:10:42.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.075 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:10:42.075 00:10:42.075 --- 10.0.0.1 ping statistics --- 00:10:42.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.075 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1964536 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1964536 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1964536 ']' 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.075 19:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.075 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.344 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.908 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.908 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:42.908 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:42.908 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.908 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.166 19:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:43.166 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.356 Initializing NVMe Controllers 00:10:55.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:55.356 Initialization complete. Launching workers. 00:10:55.356 ======================================================== 00:10:55.356 Latency(us) 00:10:55.356 Device Information : IOPS MiB/s Average min max 00:10:55.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13684.50 53.46 4678.02 720.31 18106.49 00:10:55.356 ======================================================== 00:10:55.356 Total : 13684.50 53.46 4678.02 720.31 18106.49 00:10:55.356 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.356 rmmod nvme_tcp 00:10:55.356 rmmod nvme_fabrics 00:10:55.356 rmmod nvme_keyring 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1964536 ']' 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1964536 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1964536 ']' 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1964536 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:55.356 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.356 19:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1964536 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1964536' 00:10:55.356 killing process with pid 1964536 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1964536 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1964536 00:10:55.356 nvmf threads initialize successfully 00:10:55.356 bdev subsystem init successfully 00:10:55.356 created a nvmf target service 00:10:55.356 create targets's poll groups done 00:10:55.356 all subsystems of target started 00:10:55.356 nvmf target is running 00:10:55.356 all subsystems of target stopped 00:10:55.356 destroy targets's poll groups done 00:10:55.356 destroyed the nvmf target service 00:10:55.356 bdev subsystem finish successfully 00:10:55.356 nvmf threads destroy successfully 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.356 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.925 00:10:55.925 real 0m19.275s 00:10:55.925 user 0m46.384s 00:10:55.925 sys 0m5.420s 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.925 ************************************ 00:10:55.925 END TEST nvmf_example 00:10:55.925 ************************************ 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.925 19:46:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.925 ************************************ 00:10:55.925 START TEST nvmf_filesystem 00:10:55.925 ************************************ 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:55.925 * Looking for test storage... 00:10:55.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:55.925 19:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:55.925 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:55.926 19:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:55.926 19:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:55.926 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:55.926 19:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:55.926 #define SPDK_CONFIG_H 00:10:55.926 #define SPDK_CONFIG_APPS 1 00:10:55.927 #define SPDK_CONFIG_ARCH native 00:10:55.927 #undef SPDK_CONFIG_ASAN 00:10:55.927 #undef SPDK_CONFIG_AVAHI 00:10:55.927 #undef SPDK_CONFIG_CET 00:10:55.927 #define SPDK_CONFIG_COVERAGE 1 00:10:55.927 #define SPDK_CONFIG_CROSS_PREFIX 00:10:55.927 #undef SPDK_CONFIG_CRYPTO 00:10:55.927 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:55.927 #undef SPDK_CONFIG_CUSTOMOCF 00:10:55.927 #undef SPDK_CONFIG_DAOS 00:10:55.927 #define SPDK_CONFIG_DAOS_DIR 00:10:55.927 #define SPDK_CONFIG_DEBUG 1 00:10:55.927 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:55.927 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:55.927 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:55.927 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:55.927 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:55.927 #undef SPDK_CONFIG_DPDK_UADK 00:10:55.927 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:55.927 #define SPDK_CONFIG_EXAMPLES 1 00:10:55.927 #undef SPDK_CONFIG_FC 00:10:55.927 #define SPDK_CONFIG_FC_PATH 00:10:55.927 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:55.927 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:55.927 #undef SPDK_CONFIG_FUSE 00:10:55.927 #undef SPDK_CONFIG_FUZZER 00:10:55.927 #define SPDK_CONFIG_FUZZER_LIB 00:10:55.927 #undef SPDK_CONFIG_GOLANG 00:10:55.927 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:55.927 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:55.927 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:55.927 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:55.927 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:55.927 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:55.927 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:55.927 #define SPDK_CONFIG_IDXD 1 00:10:55.927 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:55.927 #undef SPDK_CONFIG_IPSEC_MB 00:10:55.927 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:55.927 #define SPDK_CONFIG_ISAL 1 00:10:55.927 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:55.927 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:55.927 #define SPDK_CONFIG_LIBDIR 00:10:55.927 #undef SPDK_CONFIG_LTO 00:10:55.927 #define SPDK_CONFIG_MAX_LCORES 128 00:10:55.927 #define SPDK_CONFIG_NVME_CUSE 1 00:10:55.927 #undef SPDK_CONFIG_OCF 00:10:55.927 #define SPDK_CONFIG_OCF_PATH 00:10:55.927 #define SPDK_CONFIG_OPENSSL_PATH 00:10:55.927 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:55.927 #define SPDK_CONFIG_PGO_DIR 00:10:55.927 #undef SPDK_CONFIG_PGO_USE 00:10:55.927 #define SPDK_CONFIG_PREFIX /usr/local 00:10:55.927 #undef SPDK_CONFIG_RAID5F 00:10:55.927 #undef SPDK_CONFIG_RBD 00:10:55.927 #define SPDK_CONFIG_RDMA 1 00:10:55.927 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:55.927 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:55.927 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:55.927 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:55.927 #define SPDK_CONFIG_SHARED 1 00:10:55.927 #undef SPDK_CONFIG_SMA 00:10:55.927 #define SPDK_CONFIG_TESTS 1 00:10:55.927 #undef SPDK_CONFIG_TSAN 00:10:55.927 #define SPDK_CONFIG_UBLK 1 00:10:55.927 #define SPDK_CONFIG_UBSAN 1 00:10:55.927 #undef SPDK_CONFIG_UNIT_TESTS 00:10:55.927 #undef SPDK_CONFIG_URING 00:10:55.927 #define SPDK_CONFIG_URING_PATH 00:10:55.927 #undef SPDK_CONFIG_URING_ZNS 00:10:55.927 #undef SPDK_CONFIG_USDT 00:10:55.927 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:55.927 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:55.927 #define SPDK_CONFIG_VFIO_USER 1 00:10:55.927 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:10:55.927 #define SPDK_CONFIG_VHOST 1 00:10:55.927 #define SPDK_CONFIG_VIRTIO 1 00:10:55.927 #undef SPDK_CONFIG_VTUNE 00:10:55.927 #define SPDK_CONFIG_VTUNE_DIR 00:10:55.927 #define SPDK_CONFIG_WERROR 1 00:10:55.927 #define SPDK_CONFIG_WPDK_DIR 00:10:55.927 #undef SPDK_CONFIG_XNVME 00:10:55.927 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:55.927 19:46:47 
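The heavily escaped pattern in the applications.sh@23 entry above (*\#\d\e\f\i\n\e\ \S\P\D\K...*) is just xtrace's rendering of a quoted glob: the script reads all of include/spdk/config.h into the [[ ]] word with $(<file) and substring-matches it for "#define SPDK_CONFIG_DEBUG", avoiding a grep fork. A self-contained sketch of the same idiom against a temporary file:

    # Sketch: substring-match a file's contents with [[ ... == *"literal"* ]].
    cfg=$(mktemp)
    printf '#define SPDK_CONFIG_H\n#define SPDK_CONFIG_DEBUG 1\n' > "$cfg"
    if [[ $(< "$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug define present"
    fi
    rm -f "$cfg"
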
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:55.927 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:55.928 19:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:55.928 19:46:47 
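The long run of ": <value>" entries, each followed by "export SPDK_TEST_*" or "export SPDK_RUN_*", is consistent with the parameter-default idiom: under xtrace, : "${VAR:=default}" is logged as ": 0" (or ": 1", ": tcp", ": e810") because the expansion has already happened, and the export on the following script line publishes the result to child processes. A minimal sketch with a hypothetical flag name:

    # Sketch: set a default only if the caller did not provide one, then export.
    : "${SPDK_TEST_EXAMPLE:=0}"   # hypothetical flag; xtrace shows this as ": 0"
    export SPDK_TEST_EXAMPLE
    echo "SPDK_TEST_EXAMPLE=${SPDK_TEST_EXAMPLE}"
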
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:55.928 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:55.929 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:56.188 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:56.188 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.188 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:56.188 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1966931 ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1966931 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.sZK1Vo 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sZK1Vo/tests/target /tmp/spdk.sZK1Vo 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=950202368 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4334227456 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=185208930304 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974283264 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10765352960 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97924960256 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171829760 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194857472 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23027712 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97984573440 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=2568192 00:10:56.189 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597422592 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597426688 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:56.190 * Looking for test storage... 
00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=185208930304 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12979945472 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:56.190 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.191 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.457 
19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:01.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:01.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:01.457 Found net devices under 0000:86:00.0: cvl_0_0 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.457 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:01.458 Found net devices under 0000:86:00.1: cvl_0_1 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.458 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:11:01.458 00:11:01.458 --- 10.0.0.2 ping statistics --- 00:11:01.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.458 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:01.458 00:11:01.458 --- 10.0.0.1 ping statistics --- 00:11:01.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.458 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.458 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:01.717 ************************************ 00:11:01.717 START TEST nvmf_filesystem_no_in_capsule 00:11:01.717 ************************************ 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1969954 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1969954 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1969954 ']' 00:11:01.717 
19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.717 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.717 [2024-07-24 19:46:53.155448] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:11:01.717 [2024-07-24 19:46:53.155487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.717 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.717 [2024-07-24 19:46:53.215424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.717 [2024-07-24 19:46:53.290715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.717 [2024-07-24 19:46:53.290757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.717 [2024-07-24 19:46:53.290763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.717 [2024-07-24 19:46:53.290769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.717 [2024-07-24 19:46:53.290774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
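
[Editor's note: the nvmf_tcp_init sequence above is a self-contained recipe — move the target-side NIC port into a fresh network namespace, address both ends, open TCP/4420, then run nvmf_tgt inside that namespace. A hedged re-creation follows; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the log's own, while the relative binary path is an assumption.]

# Sketch of the namespace plumbing performed by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target-side port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # sanity check: root ns -> target ns
# Then launch the target inside the namespace, as the log does next
# (binary path relative to the SPDK checkout is assumed here):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
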
00:11:01.717 [2024-07-24 19:46:53.290818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.717 [2024-07-24 19:46:53.290914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.717 [2024-07-24 19:46:53.290999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.717 [2024-07-24 19:46:53.291001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.648 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.648 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:02.648 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.648 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.648 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.648 [2024-07-24 19:46:54.009569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.648 Malloc1 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.648 19:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.648 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.649 [2024-07-24 19:46:54.154133] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:02.649 { 00:11:02.649 "name": "Malloc1", 00:11:02.649 "aliases": [ 00:11:02.649 "cf4c774d-00c5-49ae-aa18-ad9fccf11d0b" 00:11:02.649 ], 00:11:02.649 "product_name": "Malloc disk", 00:11:02.649 "block_size": 512, 00:11:02.649 "num_blocks": 1048576, 00:11:02.649 "uuid": "cf4c774d-00c5-49ae-aa18-ad9fccf11d0b", 00:11:02.649 "assigned_rate_limits": { 00:11:02.649 "rw_ios_per_sec": 0, 00:11:02.649 "rw_mbytes_per_sec": 0, 00:11:02.649 "r_mbytes_per_sec": 0, 00:11:02.649 "w_mbytes_per_sec": 0 00:11:02.649 }, 00:11:02.649 "claimed": true, 00:11:02.649 "claim_type": "exclusive_write", 00:11:02.649 "zoned": false, 00:11:02.649 "supported_io_types": { 00:11:02.649 "read": 
true, 00:11:02.649 "write": true, 00:11:02.649 "unmap": true, 00:11:02.649 "flush": true, 00:11:02.649 "reset": true, 00:11:02.649 "nvme_admin": false, 00:11:02.649 "nvme_io": false, 00:11:02.649 "nvme_io_md": false, 00:11:02.649 "write_zeroes": true, 00:11:02.649 "zcopy": true, 00:11:02.649 "get_zone_info": false, 00:11:02.649 "zone_management": false, 00:11:02.649 "zone_append": false, 00:11:02.649 "compare": false, 00:11:02.649 "compare_and_write": false, 00:11:02.649 "abort": true, 00:11:02.649 "seek_hole": false, 00:11:02.649 "seek_data": false, 00:11:02.649 "copy": true, 00:11:02.649 "nvme_iov_md": false 00:11:02.649 }, 00:11:02.649 "memory_domains": [ 00:11:02.649 { 00:11:02.649 "dma_device_id": "system", 00:11:02.649 "dma_device_type": 1 00:11:02.649 }, 00:11:02.649 { 00:11:02.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.649 "dma_device_type": 2 00:11:02.649 } 00:11:02.649 ], 00:11:02.649 "driver_specific": {} 00:11:02.649 } 00:11:02.649 ]' 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:02.649 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:02.906 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:02.906 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:02.906 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:02.906 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.906 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.839 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.839 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:03.839 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.839 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:03.839 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:06.363 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:06.364 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.364 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.364 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.364 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.929 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.302 ************************************ 00:11:08.302 START TEST filesystem_ext4 00:11:08.302 ************************************ 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
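
[Editor's note: the initiator-side steps traced above — attach over TCP, locate the block device by its subsystem serial, verify its size, partition it — condense to the sketch below. The NQN, serial, address, and parted arguments are copied from the log; the sysfs size lookup (512-byte sectors) mirrors what sec_size_to_bytes computes.]

# Sketch of the initiator-side flow traced above.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
sleep 2
# The harness finds the namespace by the serial set at subsystem creation:
name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
size=$(( $(cat "/sys/block/$name/size") * 512 ))  # sectors -> bytes
echo "$name is $size bytes"                       # expected: 536870912 (512 MiB)
mkdir -p /mnt/device
parted -s "/dev/$name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
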
00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:08.302 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.302 mke2fs 1.46.5 (30-Dec-2021) 00:11:08.302 Discarding device blocks: 0/522240 done 00:11:08.302 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.302 Filesystem UUID: 9395b94f-9e2f-4184-8203-3d77d21a0124 00:11:08.302 Superblock backups stored on blocks: 00:11:08.302 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.302 00:11:08.302 Allocating group tables: 0/64 done 00:11:08.302 Writing inode tables: 0/64 done 00:11:10.828 Creating journal (8192 blocks): done 00:11:11.086 Writing superblocks and filesystem accounting information: 0/64 done 00:11:11.086 00:11:11.086 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:11.086 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.086 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.347 
19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1969954 00:11:11.347 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.348 00:11:11.348 real 0m3.303s 00:11:11.348 user 0m0.023s 00:11:11.348 sys 0m0.049s 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:11.348 ************************************ 00:11:11.348 END TEST filesystem_ext4 00:11:11.348 ************************************ 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.348 ************************************ 00:11:11.348 START TEST filesystem_btrfs 00:11:11.348 ************************************ 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:11.348 19:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:11.348 19:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:11.917 btrfs-progs v6.6.2 00:11:11.917 See https://btrfs.readthedocs.io for more information. 00:11:11.917 00:11:11.917 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:11.917 NOTE: several default settings have changed in version 5.15, please make sure 00:11:11.917 this does not affect your deployments: 00:11:11.917 - DUP for metadata (-m dup) 00:11:11.917 - enabled no-holes (-O no-holes) 00:11:11.917 - enabled free-space-tree (-R free-space-tree) 00:11:11.917 00:11:11.917 Label: (null) 00:11:11.917 UUID: b00e1927-ac3b-441f-90a6-0a1b79d03f6c 00:11:11.917 Node size: 16384 00:11:11.917 Sector size: 4096 00:11:11.917 Filesystem size: 510.00MiB 00:11:11.917 Block group profiles: 00:11:11.917 Data: single 8.00MiB 00:11:11.917 Metadata: DUP 32.00MiB 00:11:11.917 System: DUP 8.00MiB 00:11:11.917 SSD detected: yes 00:11:11.917 Zoned device: no 00:11:11.917 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:11.917 Runtime features: free-space-tree 00:11:11.917 Checksum: crc32c 00:11:11.917 Number of devices: 1 00:11:11.917 Devices: 00:11:11.917 ID SIZE PATH 00:11:11.917 1 510.00MiB /dev/nvme0n1p1 00:11:11.917 00:11:11.917 19:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:11.917 19:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1969954 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.904 00:11:12.904 real 0m1.347s 00:11:12.904 user 0m0.028s 00:11:12.904 sys 0m0.055s 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:12.904 ************************************ 00:11:12.904 END TEST filesystem_btrfs 00:11:12.904 ************************************ 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.904 ************************************ 00:11:12.904 START TEST filesystem_xfs 00:11:12.904 ************************************ 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:12.904 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:12.904 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:12.904 = sectsz=512 attr=2, projid32bit=1 00:11:12.904 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:12.904 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:12.904 data = bsize=4096 blocks=130560, imaxpct=25 00:11:12.904 = sunit=0 swidth=0 blks 00:11:12.904 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:12.904 log =internal log bsize=4096 blocks=16384, version=2 00:11:12.904 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:12.904 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.842 Discarding blocks...Done. 00:11:13.842 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:14.101 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1969954 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.642 00:11:16.642 real 0m3.827s 00:11:16.642 user 0m0.022s 00:11:16.642 sys 0m0.052s 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.642 ************************************ 00:11:16.642 END TEST filesystem_xfs 00:11:16.642 ************************************ 00:11:16.642 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.901 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.901 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1969954 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1969954 ']' 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1969954 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1969954 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1969954' 00:11:17.161 killing process with pid 1969954 00:11:17.161 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1969954 00:11:17.161 19:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1969954 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:17.421 00:11:17.421 real 0m15.863s 00:11:17.421 user 1m2.453s 00:11:17.421 sys 0m1.165s 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.421 ************************************ 00:11:17.421 END TEST nvmf_filesystem_no_in_capsule 00:11:17.421 ************************************ 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.421 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.681 ************************************ 00:11:17.681 START TEST nvmf_filesystem_in_capsule 00:11:17.681 ************************************ 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1972930 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1972930 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1972930 ']' 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
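
[Editor's note: each of the three filesystem subtests that just completed (ext4, btrfs, xfs) repeats the same create/use/verify pattern over the NVMe-oF device. A minimal sketch of that loop follows; the mkfs force flags per type match the log (ext4 uses -F, btrfs and xfs use -f), and the mount point and test file name are the log's own.]

# Sketch of the per-filesystem check repeated by the subtests above.
exercise_fs() {
    local fstype=$1 dev=$2
    case $fstype in
        ext4)  mkfs.ext4 -F "$dev" ;;
        btrfs) mkfs.btrfs -f "$dev" ;;
        xfs)   mkfs.xfs -f "$dev" ;;
    esac
    mount "$dev" /mnt/device
    touch /mnt/device/aaa      # write through the NVMe/TCP connection
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
}
exercise_fs ext4 /dev/nvme0n1p1
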
00:11:17.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.681 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.681 [2024-07-24 19:47:09.092089] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:11:17.681 [2024-07-24 19:47:09.092128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.681 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.681 [2024-07-24 19:47:09.150053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.681 [2024-07-24 19:47:09.230340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.681 [2024-07-24 19:47:09.230381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.681 [2024-07-24 19:47:09.230388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.681 [2024-07-24 19:47:09.230394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.681 [2024-07-24 19:47:09.230400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.681 [2024-07-24 19:47:09.230455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.681 [2024-07-24 19:47:09.230473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.681 [2024-07-24 19:47:09.230560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.681 [2024-07-24 19:47:09.230561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
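
[Editor's note: the only knob separating this test from the previous one is the transport's in-capsule data size — rpc_cmd nvmf_create_transport ran with -c 0 before and runs with -c 4096 below. The target-provisioning RPC sequence that follows condenses to this sketch; it assumes rpc_cmd resolves to SPDK's scripts/rpc.py client, while the RPC names, NQN, serial, and sizes are taken verbatim from the log.]

# Sketch of the target provisioning RPCs traced below, issued against the
# nvmf_tgt started above; IN_CAPSULE=0 in the first test, 4096 in this one.
IN_CAPSULE=4096
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c "$IN_CAPSULE"
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
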
00:11:18.617 [2024-07-24 19:47:09.941255] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 Malloc1 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.617 [2024-07-24 19:47:10.090646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:18.617 19:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.617 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:18.618 { 00:11:18.618 "name": "Malloc1", 00:11:18.618 "aliases": [ 00:11:18.618 "c6456b2f-1cad-4474-8b9d-78a75b26ee85" 00:11:18.618 ], 00:11:18.618 "product_name": "Malloc disk", 00:11:18.618 "block_size": 512, 00:11:18.618 "num_blocks": 1048576, 00:11:18.618 "uuid": "c6456b2f-1cad-4474-8b9d-78a75b26ee85", 00:11:18.618 "assigned_rate_limits": { 00:11:18.618 "rw_ios_per_sec": 0, 00:11:18.618 "rw_mbytes_per_sec": 0, 00:11:18.618 "r_mbytes_per_sec": 0, 00:11:18.618 "w_mbytes_per_sec": 0 00:11:18.618 }, 00:11:18.618 "claimed": true, 00:11:18.618 "claim_type": "exclusive_write", 00:11:18.618 "zoned": false, 00:11:18.618 "supported_io_types": { 00:11:18.618 "read": true, 00:11:18.618 "write": true, 00:11:18.618 "unmap": true, 00:11:18.618 "flush": true, 00:11:18.618 "reset": true, 00:11:18.618 "nvme_admin": false, 00:11:18.618 "nvme_io": false, 00:11:18.618 "nvme_io_md": false, 00:11:18.618 "write_zeroes": true, 00:11:18.618 "zcopy": true, 00:11:18.618 "get_zone_info": false, 00:11:18.618 "zone_management": false, 00:11:18.618 "zone_append": false, 00:11:18.618 "compare": false, 00:11:18.618 "compare_and_write": false, 00:11:18.618 "abort": true, 00:11:18.618 "seek_hole": false, 00:11:18.618 "seek_data": false, 00:11:18.618 "copy": true, 00:11:18.618 "nvme_iov_md": false 00:11:18.618 }, 00:11:18.618 "memory_domains": [ 00:11:18.618 { 00:11:18.618 "dma_device_id": "system", 00:11:18.618 "dma_device_type": 1 00:11:18.618 }, 00:11:18.618 { 00:11:18.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.618 "dma_device_type": 2 00:11:18.618 } 00:11:18.618 ], 00:11:18.618 "driver_specific": {} 00:11:18.618 } 00:11:18.618 ]' 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:18.618 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:18.618 19:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.995 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.995 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:19.995 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.995 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:19.995 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.913 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.913 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:21.914 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:22.177 19:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:22.744 19:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 ************************************ 00:11:23.684 START TEST filesystem_in_capsule_ext4 00:11:23.684 ************************************ 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:23.684 19:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:23.684 mke2fs 1.46.5 (30-Dec-2021) 00:11:23.943 Discarding device blocks: 0/522240 done 00:11:23.943 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:23.943 Filesystem UUID: 56ac27df-f1ae-47b4-8f83-926d073fa6a1 00:11:23.943 Superblock backups stored on blocks: 00:11:23.943 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:23.943 00:11:23.943 Allocating group tables: 0/64 done 00:11:23.943 Writing inode tables: 0/64 done 00:11:26.479 Creating journal (8192 blocks): done 00:11:26.479 Writing superblocks and filesystem accounting information: 0/64 done 00:11:26.479 00:11:26.479 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:26.479 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.415 19:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.674 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1972930 00:11:27.674 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.674 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.674 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.674 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.675 00:11:27.675 real 0m3.830s 00:11:27.675 user 0m0.023s 00:11:27.675 sys 0m0.048s 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.675 ************************************ 00:11:27.675 END TEST filesystem_in_capsule_ext4 00:11:27.675 ************************************ 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.675 19:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.675 ************************************ 00:11:27.675 START TEST filesystem_in_capsule_btrfs 00:11:27.675 ************************************ 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:27.675 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.935 btrfs-progs v6.6.2 00:11:27.935 See https://btrfs.readthedocs.io for more information. 00:11:27.935 00:11:27.935 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
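[editor sketch] The make_filesystem calls traced in these tests reduce to a small helper; this sketch is reconstructed from the xtrace (the real common.sh helper also keeps a retry counter, local i=0, elided here):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 spells its force flag -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }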
00:11:27.935 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.935 this does not affect your deployments: 00:11:27.935 - DUP for metadata (-m dup) 00:11:27.935 - enabled no-holes (-O no-holes) 00:11:27.935 - enabled free-space-tree (-R free-space-tree) 00:11:27.935 00:11:27.935 Label: (null) 00:11:27.935 UUID: 671d3f4c-23a2-4324-8594-5615cc1f2f53 00:11:27.935 Node size: 16384 00:11:27.935 Sector size: 4096 00:11:27.935 Filesystem size: 510.00MiB 00:11:27.935 Block group profiles: 00:11:27.935 Data: single 8.00MiB 00:11:27.935 Metadata: DUP 32.00MiB 00:11:27.935 System: DUP 8.00MiB 00:11:27.935 SSD detected: yes 00:11:27.935 Zoned device: no 00:11:27.935 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.935 Runtime features: free-space-tree 00:11:27.935 Checksum: crc32c 00:11:27.935 Number of devices: 1 00:11:27.935 Devices: 00:11:27.935 ID SIZE PATH 00:11:27.935 1 510.00MiB /dev/nvme0n1p1 00:11:27.935 00:11:27.935 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:27.935 19:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.503 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.503 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1972930 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.763 00:11:28.763 real 0m1.042s 00:11:28.763 user 0m0.022s 00:11:28.763 sys 0m0.058s 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.763 19:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.763 ************************************ 00:11:28.763 END TEST filesystem_in_capsule_btrfs 00:11:28.763 ************************************ 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.763 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.763 ************************************ 00:11:28.764 START TEST filesystem_in_capsule_xfs 00:11:28.764 ************************************ 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:28.764 19:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:28.764 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:28.764 = sectsz=512 attr=2, projid32bit=1 00:11:28.764 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:28.764 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:28.764 data = bsize=4096 blocks=130560, imaxpct=25 00:11:28.764 = sunit=0 swidth=0 blks 00:11:28.764 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:28.764 log =internal log bsize=4096 blocks=16384, version=2 00:11:28.764 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:28.764 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:29.702 Discarding blocks...Done. 00:11:29.702 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:29.702 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1972930 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.241 00:11:32.241 real 0m3.368s 00:11:32.241 user 0m0.022s 00:11:32.241 sys 0m0.050s 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 ************************************ 00:11:32.241 END TEST filesystem_in_capsule_xfs 00:11:32.241 ************************************ 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.241 19:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1972930 00:11:32.241 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1972930 ']' 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1972930 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1972930 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1972930' 00:11:32.242 killing process with pid 1972930 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1972930 00:11:32.242 19:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1972930 00:11:32.849 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:32.849 00:11:32.849 real 0m15.127s 00:11:32.849 user 0m59.508s 
00:11:32.849 sys 0m1.150s 00:11:32.849 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.850 ************************************ 00:11:32.850 END TEST nvmf_filesystem_in_capsule 00:11:32.850 ************************************ 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.850 rmmod nvme_tcp 00:11:32.850 rmmod nvme_fabrics 00:11:32.850 rmmod nvme_keyring 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.850 19:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.759 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.759 00:11:34.759 real 0m38.949s 00:11:34.759 user 2m3.543s 00:11:34.759 sys 0m6.656s 00:11:34.759 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.759 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.759 ************************************ 00:11:34.759 END TEST nvmf_filesystem 00:11:34.759 ************************************ 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.020 ************************************ 00:11:35.020 START TEST nvmf_target_discovery 00:11:35.020 ************************************ 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.020 * Looking for test storage... 00:11:35.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.020 19:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.020 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.021 19:47:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.299 19:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:40.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:40.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.299 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:40.300 Found net devices under 0000:86:00.0: cvl_0_0 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.300 19:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:40.300 Found net devices under 0000:86:00.1: cvl_0_1 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.300 19:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:11:40.300 00:11:40.300 --- 10.0.0.2 ping statistics --- 00:11:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.300 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:11:40.300 00:11:40.300 --- 10.0.0.1 ping statistics --- 00:11:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.300 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1978964 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1978964 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1978964 ']' 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:40.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.300 19:47:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.300 [2024-07-24 19:47:31.734289] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:11:40.300 [2024-07-24 19:47:31.734331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.300 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.300 [2024-07-24 19:47:31.791639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.300 [2024-07-24 19:47:31.871619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.300 [2024-07-24 19:47:31.871656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.300 [2024-07-24 19:47:31.871663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.300 [2024-07-24 19:47:31.871669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.300 [2024-07-24 19:47:31.871674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
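The nvmf/common.sh@480 line above shows the actual target launch: nvmf_tgt runs inside the namespace so its listeners bind to the isolated port. A minimal stand-in for the harness's waitforlisten helper (assumed here to simply poll the RPC socket) would be:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: four reactor cores
    nvmfpid=$!
    # poll until the app answers on its UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done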
00:11:40.300 [2024-07-24 19:47:31.871712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.300 [2024-07-24 19:47:31.871811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.300 [2024-07-24 19:47:31.871895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.300 [2024-07-24 19:47:31.871896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 [2024-07-24 19:47:32.594476] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 Null1 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 
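discovery.sh then provisions four identical subsystems over the freshly created TCP transport; the rpc_cmd calls traced here and in the lines below map onto scripts/rpc.py roughly as follows (serial numbers and sizes taken from the trace):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192         # transport flags as used by the harness
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512          # name, size (MB), block size
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i                    # -a: allow any host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done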
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 [2024-07-24 19:47:32.639926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 Null2 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:41.239 Null3 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.239 Null4 00:11:41.239 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:41.240 00:11:41.240 Discovery Log Number of Records 6, Generation counter 6 00:11:41.240 =====Discovery Log Entry 0====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: current discovery subsystem 00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4420 00:11:41.240 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: explicit discovery connections, duplicate discovery information 00:11:41.240 sectype: none 00:11:41.240 =====Discovery Log Entry 1====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: nvme subsystem 00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4420 00:11:41.240 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: none 00:11:41.240 sectype: none 00:11:41.240 =====Discovery Log Entry 2====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: nvme subsystem 00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4420 00:11:41.240 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: none 00:11:41.240 sectype: none 00:11:41.240 =====Discovery Log Entry 3====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: nvme subsystem 00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4420 00:11:41.240 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: none 00:11:41.240 sectype: none 00:11:41.240 =====Discovery Log Entry 4====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: nvme subsystem 
00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4420 00:11:41.240 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: none 00:11:41.240 sectype: none 00:11:41.240 =====Discovery Log Entry 5====== 00:11:41.240 trtype: tcp 00:11:41.240 adrfam: ipv4 00:11:41.240 subtype: discovery subsystem referral 00:11:41.240 treq: not required 00:11:41.240 portid: 0 00:11:41.240 trsvcid: 4430 00:11:41.240 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.240 traddr: 10.0.0.2 00:11:41.240 eflags: none 00:11:41.240 sectype: none 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:41.240 Perform nvmf subsystem discovery via RPC 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.240 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.500 [ 00:11:41.500 { 00:11:41.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:41.500 "subtype": "Discovery", 00:11:41.500 "listen_addresses": [ 00:11:41.500 { 00:11:41.500 "trtype": "TCP", 00:11:41.500 "adrfam": "IPv4", 00:11:41.500 "traddr": "10.0.0.2", 00:11:41.500 "trsvcid": "4420" 00:11:41.500 } 00:11:41.500 ], 00:11:41.500 "allow_any_host": true, 00:11:41.500 "hosts": [] 00:11:41.500 }, 00:11:41.500 { 00:11:41.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.500 "subtype": "NVMe", 00:11:41.500 "listen_addresses": [ 00:11:41.500 { 00:11:41.500 "trtype": "TCP", 00:11:41.500 "adrfam": "IPv4", 00:11:41.500 "traddr": "10.0.0.2", 00:11:41.500 "trsvcid": "4420" 00:11:41.500 } 00:11:41.500 ], 00:11:41.500 "allow_any_host": true, 00:11:41.500 "hosts": [], 00:11:41.500 "serial_number": "SPDK00000000000001", 00:11:41.501 "model_number": "SPDK bdev Controller", 00:11:41.501 "max_namespaces": 32, 00:11:41.501 "min_cntlid": 1, 00:11:41.501 "max_cntlid": 65519, 00:11:41.501 "namespaces": [ 00:11:41.501 { 00:11:41.501 "nsid": 1, 00:11:41.501 "bdev_name": "Null1", 00:11:41.501 "name": "Null1", 00:11:41.501 "nguid": "798696C79970456C93AEB1AEA0B84FC4", 00:11:41.501 "uuid": "798696c7-9970-456c-93ae-b1aea0b84fc4" 00:11:41.501 } 00:11:41.501 ] 00:11:41.501 }, 00:11:41.501 { 00:11:41.501 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.501 "subtype": "NVMe", 00:11:41.501 "listen_addresses": [ 00:11:41.501 { 00:11:41.501 "trtype": "TCP", 00:11:41.501 "adrfam": "IPv4", 00:11:41.501 "traddr": "10.0.0.2", 00:11:41.501 "trsvcid": "4420" 00:11:41.501 } 00:11:41.501 ], 00:11:41.501 "allow_any_host": true, 00:11:41.501 "hosts": [], 00:11:41.501 "serial_number": "SPDK00000000000002", 00:11:41.501 "model_number": "SPDK bdev Controller", 00:11:41.501 "max_namespaces": 32, 00:11:41.501 "min_cntlid": 1, 00:11:41.501 "max_cntlid": 65519, 00:11:41.501 "namespaces": [ 00:11:41.501 { 00:11:41.501 "nsid": 1, 00:11:41.501 "bdev_name": "Null2", 00:11:41.501 "name": "Null2", 00:11:41.501 "nguid": "D211EBB3DC4243379A167B42CB678DF3", 00:11:41.501 "uuid": "d211ebb3-dc42-4337-9a16-7b42cb678df3" 00:11:41.501 } 00:11:41.501 ] 00:11:41.501 }, 00:11:41.501 { 00:11:41.501 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:41.501 "subtype": "NVMe", 00:11:41.501 "listen_addresses": [ 00:11:41.501 { 00:11:41.501 "trtype": "TCP", 00:11:41.501 "adrfam": "IPv4", 00:11:41.501 "traddr": "10.0.0.2", 
00:11:41.501 "trsvcid": "4420" 00:11:41.501 } 00:11:41.501 ], 00:11:41.501 "allow_any_host": true, 00:11:41.501 "hosts": [], 00:11:41.501 "serial_number": "SPDK00000000000003", 00:11:41.501 "model_number": "SPDK bdev Controller", 00:11:41.501 "max_namespaces": 32, 00:11:41.501 "min_cntlid": 1, 00:11:41.501 "max_cntlid": 65519, 00:11:41.501 "namespaces": [ 00:11:41.501 { 00:11:41.501 "nsid": 1, 00:11:41.501 "bdev_name": "Null3", 00:11:41.501 "name": "Null3", 00:11:41.501 "nguid": "D523DFFE0C544EA782725946CA6B8700", 00:11:41.501 "uuid": "d523dffe-0c54-4ea7-8272-5946ca6b8700" 00:11:41.501 } 00:11:41.501 ] 00:11:41.501 }, 00:11:41.501 { 00:11:41.501 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:41.501 "subtype": "NVMe", 00:11:41.501 "listen_addresses": [ 00:11:41.501 { 00:11:41.501 "trtype": "TCP", 00:11:41.501 "adrfam": "IPv4", 00:11:41.501 "traddr": "10.0.0.2", 00:11:41.501 "trsvcid": "4420" 00:11:41.501 } 00:11:41.501 ], 00:11:41.501 "allow_any_host": true, 00:11:41.501 "hosts": [], 00:11:41.501 "serial_number": "SPDK00000000000004", 00:11:41.501 "model_number": "SPDK bdev Controller", 00:11:41.501 "max_namespaces": 32, 00:11:41.501 "min_cntlid": 1, 00:11:41.501 "max_cntlid": 65519, 00:11:41.501 "namespaces": [ 00:11:41.501 { 00:11:41.501 "nsid": 1, 00:11:41.501 "bdev_name": "Null4", 00:11:41.501 "name": "Null4", 00:11:41.501 "nguid": "16529BC3D6A4445D9B0685ED2BE9A202", 00:11:41.501 "uuid": "16529bc3-d6a4-445d-9b06-85ed2be9a202" 00:11:41.501 } 00:11:41.501 ] 00:11:41.501 } 00:11:41.501 ] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:41.501 19:47:32 
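Teardown mirrors setup; the delete loop and the final empty-bdev assertion traced around this point condense to:

    rpc=./scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        $rpc bdev_null_delete Null$i
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    [ -z "$($rpc bdev_get_bdevs | jq -r '.[].name')" ]   # expect no bdevs left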
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.501 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:41.501 rmmod nvme_tcp 00:11:41.501 rmmod nvme_fabrics 00:11:41.501 rmmod nvme_keyring 00:11:41.501 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:41.501 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:41.501 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:41.501 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1978964 ']' 00:11:41.501 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1978964 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1978964 ']' 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1978964 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1978964 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1978964' 00:11:41.502 killing process with pid 1978964 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1978964 00:11:41.502 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1978964 00:11:41.762 19:47:33 
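nvmftestfini, traced above and continuing into the next block, unwinds the whole environment: kernel initiator modules out, target process down, namespace gone. Roughly (the namespace removal is an assumption; _remove_spdk_ns itself is not expanded in this trace):

    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done   # harness retries the unload
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1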
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.763 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.305 00:11:44.305 real 0m8.918s 00:11:44.305 user 0m7.094s 00:11:44.305 sys 0m4.254s 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.305 ************************************ 00:11:44.305 END TEST nvmf_target_discovery 00:11:44.305 ************************************ 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.305 ************************************ 00:11:44.305 START TEST nvmf_referrals 00:11:44.305 ************************************ 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.305 * Looking for test storage... 
00:11:44.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 
19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.305 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:44.306 19:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:44.306 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
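gather_supported_nvmf_pci_devs, which the trace enters next, classifies NICs by PCI vendor:device ID; with SPDK_TEST_NVMF_NICS=e810 only the Intel 0x159b (E810, ice driver) ports survive, and their netdev names are then read out of sysfs. The loop reduces to something like:

    # pci_devs holds the matching BDFs, here 0000:86:00.0 and 0000:86:00.1
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this port
        [[ -e ${pci_net_devs[0]} ]] || continue            # skip ports with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths -> names
        net_devs+=("${pci_net_devs[@]}")                   # e.g. cvl_0_0, cvl_0_1
    done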
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.588 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:49.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:49.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:49.589 Found net devices under 0000:86:00.0: cvl_0_0 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:49.589 Found net devices under 0000:86:00.1: cvl_0_1 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:11:49.589 00:11:49.589 --- 10.0.0.2 ping statistics --- 00:11:49.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.589 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:11:49.589 00:11:49.589 --- 10.0.0.1 ping statistics --- 00:11:49.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.589 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1982565 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1982565 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1982565 ']' 00:11:49.589 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.590 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.590 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.590 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.590 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:49.590 [2024-07-24 19:47:40.943426] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
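Before the second target instance starts, connectivity is proven both ways across the namespace boundary, exactly as in the earlier test: open the NVMe/TCP port in the firewall, then ping in each direction:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator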
00:11:49.590 [2024-07-24 19:47:40.943472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.590 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.590 [2024-07-24 19:47:41.004521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.590 [2024-07-24 19:47:41.085989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.590 [2024-07-24 19:47:41.086029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.590 [2024-07-24 19:47:41.086035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.590 [2024-07-24 19:47:41.086046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.590 [2024-07-24 19:47:41.086052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.590 [2024-07-24 19:47:41.086087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.590 [2024-07-24 19:47:41.086187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.590 [2024-07-24 19:47:41.086270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.590 [2024-07-24 19:47:41.086271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 [2024-07-24 19:47:41.802484] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 [2024-07-24 19:47:41.815837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 
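rpc_cmd in these traces is the autotest wrapper for SPDK's JSON-RPC client. A sketch of the equivalent manual bring-up, assuming rpc_cmd resolves to scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polls above (flags copied from the trace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                          # referrals.sh@40
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery # referrals.sh@41

Port 8009 is the well-known NVMe-oF discovery service port, which is why the discovery subsystem listener lands there rather than on 4420.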
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:50.530 19:47:41 
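The rpc-side half of get_referral_ips is fully visible above; condensed, the add-and-verify round-trip is ($RPC as in the earlier sketch):

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430   # referrals.sh@44-46
    done
    [[ $($RPC nvmf_discovery_get_referrals | jq length) == 3 ]]
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # -> 127.0.0.2 127.0.0.3 127.0.0.4, the expected list compared at referrals.sh@49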
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:50.530 19:47:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:50.530 19:47:42 
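The nvme-side half of the same check queries the discovery controller itself and filters out the "current discovery subsystem" record, leaving only referral entries. This is exactly the pipeline traced at referrals.sh@26:

    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

After the three nvmf_discovery_remove_referral calls above, this pipeline prints nothing, which is what the [[ '' == '' ]] comparison just below asserts.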
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:50.530 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:50.531 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:50.790 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:51.049 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.050 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.309 19:47:42 
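get_discovery_entries, traced at referrals.sh@31-34, is the same discovery call with a subtype filter; the test uses it to confirm that a referral added with -n nqn.2016-06.io.spdk:cnode1 is advertised as an "nvme subsystem" record, while one added with -n discovery is advertised as a "discovery subsystem referral". Reconstructed from the trace:

    get_discovery_entries() {
        local subtype=$1
        nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
            --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
            -t tcp -a 10.0.0.2 -s 8009 -o json |
            jq ".records[] | select(.subtype == \"$subtype\")"
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
    # -> nqn.2014-08.org.nvmexpress.discovery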
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.309 19:47:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.570 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.831 
19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.831 rmmod nvme_tcp 00:11:51.831 rmmod nvme_fabrics 00:11:51.831 rmmod nvme_keyring 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1982565 ']' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1982565 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1982565 ']' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1982565 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1982565 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1982565' 00:11:51.831 killing process with pid 1982565 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1982565 00:11:51.831 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1982565 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.090 19:47:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.000 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:54.000 00:11:54.000 real 0m10.190s 00:11:54.000 user 0m12.006s 00:11:54.000 sys 0m4.575s 00:11:54.000 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.000 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.000 ************************************ 00:11:54.000 END TEST nvmf_referrals 00:11:54.000 ************************************ 00:11:54.261 
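The nvmftestfini teardown just above mirrors the bring-up; roughly as follows. The kill/wait pair is what killprocess does per the trace; the namespace removal is the assumed body of _remove_spdk_ns, whose output is redirected away at fd 15 above:

    modprobe -r nvme-tcp nvme-fabrics   # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    kill $nvmfpid && wait $nvmfpid      # killprocess 1982565 (process_name=reactor_0)
    ip netns delete cvl_0_0_ns_spdk     # assumed: _remove_spdk_ns returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1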
19:47:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.261 ************************************ 00:11:54.261 START TEST nvmf_connect_disconnect 00:11:54.261 ************************************ 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:54.261 * Looking for test storage... 00:11:54.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.261 19:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.546 19:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 
- 0x159b)' 00:11:59.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:59.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:59.546 Found net devices under 0000:86:00.0: cvl_0_0 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.546 19:47:50 
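Device discovery here is a sysfs walk over a PCI whitelist: e810=(0x1592 0x159b) and x722=(0x37d2) for Intel (0x8086), plus the Mellanox (0x15b3) IDs listed above. This run matched the two E810 ports 0000:86:00.0/00.1 (device 0x159b, ice driver). Condensed from the trace:

    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # PCI function -> its netdev directory
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
        net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
    done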
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:59.546 Found net devices under 0000:86:00.1: cvl_0_1 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.546 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.546 19:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:11:59.546 00:11:59.546 --- 10.0.0.2 ping statistics --- 00:11:59.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.546 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:11:59.546 00:11:59.546 --- 10.0.0.1 ping statistics --- 00:11:59.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.546 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1986577 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1986577 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1986577 ']' 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.546 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.806 [2024-07-24 19:47:51.191230] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:11:59.807 [2024-07-24 19:47:51.191278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.807 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.807 [2024-07-24 19:47:51.248459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.807 [2024-07-24 19:47:51.329762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.807 [2024-07-24 19:47:51.329798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.807 [2024-07-24 19:47:51.329806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.807 [2024-07-24 19:47:51.329812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.807 [2024-07-24 19:47:51.329817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.807 [2024-07-24 19:47:51.329859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.807 [2024-07-24 19:47:51.329957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.807 [2024-07-24 19:47:51.330022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.807 [2024-07-24 19:47:51.330024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.421 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.421 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:00.421 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.421 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.421 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 [2024-07-24 19:47:52.037535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 19:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 [2024-07-24 19:47:52.089447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:00.680 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:03.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.157 rmmod nvme_tcp 00:12:17.157 rmmod nvme_fabrics 00:12:17.157 rmmod nvme_keyring 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1986577 ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1986577 ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
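(Annotation, not part of the captured log: stripped of the xtrace noise, the nvmf_connect_disconnect run above is a plain target bring-up followed by five connect/disconnect cycles before the teardown that continues below. A minimal sketch of the same sequence against an already-running nvmf_tgt; rpc_cmd in the trace is assumed to wrap scripts/rpc.py, and the nvme-cli flags mirror the address, port and NQN the log reports:)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport with the test's NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512                           # 64 MiB ram disk, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
for i in $(seq 1 5); do                                  # num_iterations=5 in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # prints "NQN:... disconnected 1 controller(s)"
done
(Each disconnect is what produced the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above.)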
00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1986577' 00:12:17.157 killing process with pid 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1986577 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.157 19:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.070 00:12:19.070 real 0m24.850s 00:12:19.070 user 1m9.503s 00:12:19.070 sys 0m5.052s 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 ************************************ 00:12:19.070 END TEST nvmf_connect_disconnect 00:12:19.070 ************************************ 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.070 19:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.070 ************************************ 00:12:19.070 START TEST nvmf_multitarget 00:12:19.071 ************************************ 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.071 * Looking for test storage... 
00:12:19.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.071 
19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.071 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local 
-ga mlx 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:24.358 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:24.358 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.358 19:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:24.358 Found net devices under 0000:86:00.0: cvl_0_0 00:12:24.358 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:24.359 Found net devices under 0000:86:00.1: cvl_0_1 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ 
tcp == tcp ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:24.359 00:12:24.359 --- 10.0.0.2 ping statistics --- 00:12:24.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.359 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:12:24.359 00:12:24.359 --- 10.0.0.1 ping statistics --- 00:12:24.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.359 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1993254 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1993254 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1993254 ']' 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.359 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 [2024-07-24 19:48:15.895594] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:12:24.359 [2024-07-24 19:48:15.895640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.359 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.359 [2024-07-24 19:48:15.954367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.620 [2024-07-24 19:48:16.035632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.620 [2024-07-24 19:48:16.035670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.620 [2024-07-24 19:48:16.035678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.620 [2024-07-24 19:48:16.035684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.620 [2024-07-24 19:48:16.035689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.620 [2024-07-24 19:48:16.035736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.620 [2024-07-24 19:48:16.035830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.620 [2024-07-24 19:48:16.035918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.620 [2024-07-24 19:48:16.035919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.191 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:25.451 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:25.452 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:25.452 "nvmf_tgt_1" 00:12:25.452 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:25.452 "nvmf_tgt_2" 00:12:25.712 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.712 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:25.712 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:25.712 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:25.712 true 00:12:25.712 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:25.972 true 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.972 rmmod nvme_tcp 00:12:25.972 rmmod nvme_fabrics 00:12:25.972 rmmod nvme_keyring 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1993254 ']' 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1993254 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1993254 ']' 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1993254 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.972 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1993254 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.232 19:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1993254' 00:12:26.232 killing process with pid 1993254 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1993254 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1993254 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.232 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.782 00:12:28.782 real 0m9.265s 00:12:28.782 user 0m9.003s 00:12:28.782 sys 0m4.377s 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.782 ************************************ 00:12:28.782 END TEST nvmf_multitarget 00:12:28.782 ************************************ 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.782 ************************************ 00:12:28.782 START TEST nvmf_rpc 00:12:28.782 ************************************ 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:28.782 * Looking for test storage... 
00:12:28.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.782 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.782 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.782 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.782 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.782 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.782 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.783 19:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.783 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.136 19:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:34.136 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:34.136 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.136 
19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:34.136 Found net devices under 0000:86:00.0: cvl_0_0 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:34.136 Found net devices under 0000:86:00.1: cvl_0_1 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.136 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.137 19:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:12:34.137 00:12:34.137 --- 10.0.0.2 ping statistics --- 00:12:34.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.137 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:12:34.137 00:12:34.137 --- 10.0.0.1 ping statistics --- 00:12:34.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.137 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1997049 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1997049 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1997049 ']' 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.137 19:48:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.397 [2024-07-24 19:48:25.771522] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:12:34.397 [2024-07-24 19:48:25.771564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.397 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.397 [2024-07-24 19:48:25.830833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.397 [2024-07-24 19:48:25.917003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.397 [2024-07-24 19:48:25.917046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.397 [2024-07-24 19:48:25.917054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.397 [2024-07-24 19:48:25.917061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.397 [2024-07-24 19:48:25.917066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
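The block above builds the back-to-back NVMe/TCP topology and brings up the target: cvl_0_0 is moved into a fresh network namespace to play the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in the firewall, both directions are ping-checked, and nvmf_tgt is then started inside that namespace. A condensed sketch of what the xtrace shows (not the verbatim nvmf/common.sh source; waitforlisten is the autotest helper that polls the default RPC socket /var/tmp/spdk.sock, and the binary path is shortened here):

  # Condensed from nvmf_tcp_init plus nvmfappstart, as traced above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                        # initiator -> target sanity check
  # Start the target on cores 0-3 (-m 0xF) inside the namespace and block
  # until its RPC socket accepts commands.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"

The -m 0xF core mask is why four reactors start on cores 0-3 directly below.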
00:12:34.397 [2024-07-24 19:48:25.917117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.397 [2024-07-24 19:48:25.917236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.397 [2024-07-24 19:48:25.917315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.397 [2024-07-24 19:48:25.917316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:35.336 "tick_rate": 2300000000, 00:12:35.336 "poll_groups": [ 00:12:35.336 { 00:12:35.336 "name": "nvmf_tgt_poll_group_000", 00:12:35.336 "admin_qpairs": 0, 00:12:35.336 "io_qpairs": 0, 00:12:35.336 "current_admin_qpairs": 0, 00:12:35.336 "current_io_qpairs": 0, 00:12:35.336 "pending_bdev_io": 0, 00:12:35.336 "completed_nvme_io": 0, 00:12:35.336 "transports": [] 00:12:35.336 }, 00:12:35.336 { 00:12:35.336 "name": "nvmf_tgt_poll_group_001", 00:12:35.336 "admin_qpairs": 0, 00:12:35.336 "io_qpairs": 0, 00:12:35.336 "current_admin_qpairs": 0, 00:12:35.336 "current_io_qpairs": 0, 00:12:35.336 "pending_bdev_io": 0, 00:12:35.336 "completed_nvme_io": 0, 00:12:35.336 "transports": [] 00:12:35.336 }, 00:12:35.336 { 00:12:35.336 "name": "nvmf_tgt_poll_group_002", 00:12:35.336 "admin_qpairs": 0, 00:12:35.336 "io_qpairs": 0, 00:12:35.336 "current_admin_qpairs": 0, 00:12:35.336 "current_io_qpairs": 0, 00:12:35.336 "pending_bdev_io": 0, 00:12:35.336 "completed_nvme_io": 0, 00:12:35.336 "transports": [] 00:12:35.336 }, 00:12:35.336 { 00:12:35.336 "name": "nvmf_tgt_poll_group_003", 00:12:35.336 "admin_qpairs": 0, 00:12:35.336 "io_qpairs": 0, 00:12:35.336 "current_admin_qpairs": 0, 00:12:35.336 "current_io_qpairs": 0, 00:12:35.336 "pending_bdev_io": 0, 00:12:35.336 "completed_nvme_io": 0, 00:12:35.336 "transports": [] 00:12:35.336 } 00:12:35.336 ] 00:12:35.336 }' 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
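rpc.sh validates the nvmf_get_stats output with two small jq wrappers whose expansions are visible in the trace: jcount counts the values a filter yields (four poll group names here, one per reactor core) and jsum totals them. A sketch reconstructed from those expansions (the here-string is an assumption; only the jq, wc and awk pipelines appear verbatim above, and rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client):

  # Reconstructed from the traced expansions (rpc.sh@14-20); a sketch, not
  # the verbatim test source.
  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l                     # count matching values
  }
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'   # sum matching values
  }
  stats=$(rpc_cmd nvmf_get_stats)
  (( $(jcount '.poll_groups[].name') == 4 ))                # one poll group per core

Note that every poll group's "transports" array is still empty at this point; each gains a TCP entry only after nvmf_create_transport -t tcp runs below.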
00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.336 [2024-07-24 19:48:26.732878] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.336 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:35.337 "tick_rate": 2300000000, 00:12:35.337 "poll_groups": [ 00:12:35.337 { 00:12:35.337 "name": "nvmf_tgt_poll_group_000", 00:12:35.337 "admin_qpairs": 0, 00:12:35.337 "io_qpairs": 0, 00:12:35.337 "current_admin_qpairs": 0, 00:12:35.337 "current_io_qpairs": 0, 00:12:35.337 "pending_bdev_io": 0, 00:12:35.337 "completed_nvme_io": 0, 00:12:35.337 "transports": [ 00:12:35.337 { 00:12:35.337 "trtype": "TCP" 00:12:35.337 } 00:12:35.337 ] 00:12:35.337 }, 00:12:35.337 { 00:12:35.337 "name": "nvmf_tgt_poll_group_001", 00:12:35.337 "admin_qpairs": 0, 00:12:35.337 "io_qpairs": 0, 00:12:35.337 "current_admin_qpairs": 0, 00:12:35.337 "current_io_qpairs": 0, 00:12:35.337 "pending_bdev_io": 0, 00:12:35.337 "completed_nvme_io": 0, 00:12:35.337 "transports": [ 00:12:35.337 { 00:12:35.337 "trtype": "TCP" 00:12:35.337 } 00:12:35.337 ] 00:12:35.337 }, 00:12:35.337 { 00:12:35.337 "name": "nvmf_tgt_poll_group_002", 00:12:35.337 "admin_qpairs": 0, 00:12:35.337 "io_qpairs": 0, 00:12:35.337 "current_admin_qpairs": 0, 00:12:35.337 "current_io_qpairs": 0, 00:12:35.337 "pending_bdev_io": 0, 00:12:35.337 "completed_nvme_io": 0, 00:12:35.337 "transports": [ 00:12:35.337 { 00:12:35.337 "trtype": "TCP" 00:12:35.337 } 00:12:35.337 ] 00:12:35.337 }, 00:12:35.337 { 00:12:35.337 "name": "nvmf_tgt_poll_group_003", 00:12:35.337 "admin_qpairs": 0, 00:12:35.337 "io_qpairs": 0, 00:12:35.337 "current_admin_qpairs": 0, 00:12:35.337 "current_io_qpairs": 0, 00:12:35.337 "pending_bdev_io": 0, 00:12:35.337 "completed_nvme_io": 0, 00:12:35.337 "transports": [ 00:12:35.337 { 00:12:35.337 "trtype": "TCP" 00:12:35.337 } 00:12:35.337 ] 00:12:35.337 } 00:12:35.337 ] 00:12:35.337 }' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:35.337 19:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 Malloc1 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 [2024-07-24 19:48:26.905120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:35.337 [2024-07-24 19:48:26.929930] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:35.337 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.337 could not add new controller: failed to write to nvme-fabrics device 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.337 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.597 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.597 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.535 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.535 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.535 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.536 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:36.536 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.077 [2024-07-24 19:48:30.296266] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:39.077 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:39.077 could not add new controller: failed to write to nvme-fabrics device 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.077 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.016 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.016 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.016 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.016 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:40.016 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
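The exchange above is the host access-control test. With allow_any_host disabled (-d), a connect from an unlisted host NQN is rejected by nvmf_qpair_access_allowed, and the NOT helper from autotest_common.sh inverts the exit status, so the step passes precisely because the connect fails. Adding the host NQN makes the identical connect succeed, removing it makes it fail again, and re-enabling allow_any_host (-e) opens the subsystem to every host. Condensed sketch of the traced flow; the connect function is a shorthand introduced for this sketch, not a harness helper:

  # hostid/hostnqn as used throughout this run.
  hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  connect() {   # shorthand for the repeated kernel-initiator connect
      nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  }
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  NOT connect                                     # rejected: host not on the allow list
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
  connect && nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
  NOT connect                                     # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  connect                                         # accepted: any host allowed

Each rejected attempt surfaces on the initiator side as "Failed to write to /dev/nvme-fabrics: Input/output error", matching the ctrlr.c error logged by the target.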
00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:41.925 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.185 [2024-07-24 19:48:33.678433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.185 
19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.185 19:48:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.567 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.567 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.567 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.567 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:43.567 19:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
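From rpc.sh@81 onward the test repeats the same subsystem life cycle five times: create the subsystem, add the TCP listener, attach Malloc1 as namespace 5, open it to any host, connect and disconnect through the kernel initiator, then remove the namespace and delete the subsystem (those two removal steps follow directly below). One iteration, condensed from the trace as a sketch:

  # Condensed sketch of the traced loop body (rpc.sh@81-94).
  hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME          # poll lsblk until the namespace shows up
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

A second loop later in the run (rpc.sh@99-107) exercises the same RPCs without connecting, letting nvmf_subsystem_add_ns auto-assign nsid 1 and removing it with nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1.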
00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 [2024-07-24 19:48:37.016855] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.476 19:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.866 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.866 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:12:46.866 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.866 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.866 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.776 [2024-07-24 19:48:40.356307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.776 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.037 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.037 19:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.976 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.976 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.976 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.976 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.976 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.527 19:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.527 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 [2024-07-24 19:48:43.690944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.528 19:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.531 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.531 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.531 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.531 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.531 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 [2024-07-24 19:48:46.973274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.442 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.825 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.825 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.825 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.825 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.825 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.734 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.734 19:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 [2024-07-24 19:48:50.272399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 [2024-07-24 19:48:50.320497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.735 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 [2024-07-24 19:48:50.372680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 [2024-07-24 19:48:50.420816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 [2024-07-24 19:48:50.469009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.996 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.997 19:48:50 
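
The repeated blocks above come from the second loop in target/rpc.sh (@99-107), which churns the same subsystem through create/teardown five times with no host attached; a sketch under the same variables as the previous snippet:

  loops=5
  for i in $(seq 1 $loops); do
      $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns $nqn Malloc1             # no -n here, so the nsid defaults to 1
      $rpc nvmf_subsystem_allow_any_host $nqn
      $rpc nvmf_subsystem_remove_ns $nqn 1
      $rpc nvmf_delete_subsystem $nqn
  done
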
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:58.997 "tick_rate": 2300000000, 00:12:58.997 "poll_groups": [ 00:12:58.997 { 00:12:58.997 "name": "nvmf_tgt_poll_group_000", 00:12:58.997 "admin_qpairs": 2, 00:12:58.997 "io_qpairs": 168, 00:12:58.997 "current_admin_qpairs": 0, 00:12:58.997 "current_io_qpairs": 0, 00:12:58.997 "pending_bdev_io": 0, 00:12:58.997 "completed_nvme_io": 261, 00:12:58.997 "transports": [ 00:12:58.997 { 00:12:58.997 "trtype": "TCP" 00:12:58.997 } 00:12:58.997 ] 00:12:58.997 }, 00:12:58.997 { 00:12:58.997 "name": "nvmf_tgt_poll_group_001", 00:12:58.997 "admin_qpairs": 2, 00:12:58.997 "io_qpairs": 168, 00:12:58.997 "current_admin_qpairs": 0, 00:12:58.997 "current_io_qpairs": 0, 00:12:58.997 "pending_bdev_io": 0, 00:12:58.997 "completed_nvme_io": 270, 00:12:58.997 "transports": [ 00:12:58.997 { 00:12:58.997 "trtype": "TCP" 00:12:58.997 } 00:12:58.997 ] 00:12:58.997 }, 00:12:58.997 { 00:12:58.997 "name": "nvmf_tgt_poll_group_002", 00:12:58.997 "admin_qpairs": 1, 00:12:58.997 "io_qpairs": 168, 00:12:58.997 "current_admin_qpairs": 0, 00:12:58.997 "current_io_qpairs": 0, 00:12:58.997 "pending_bdev_io": 0, 00:12:58.997 "completed_nvme_io": 273, 00:12:58.997 "transports": [ 00:12:58.997 { 00:12:58.997 "trtype": "TCP" 00:12:58.997 } 00:12:58.997 ] 00:12:58.997 }, 00:12:58.997 { 00:12:58.997 "name": "nvmf_tgt_poll_group_003", 00:12:58.997 "admin_qpairs": 2, 00:12:58.997 "io_qpairs": 168, 00:12:58.997 "current_admin_qpairs": 0, 00:12:58.997 "current_io_qpairs": 0, 00:12:58.997 "pending_bdev_io": 0, 00:12:58.997 "completed_nvme_io": 218, 00:12:58.997 "transports": [ 00:12:58.997 { 00:12:58.997 "trtype": "TCP" 00:12:58.997 } 00:12:58.997 ] 00:12:58.997 } 00:12:58.997 ] 00:12:58.997 }' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
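
The nvmf_get_stats dump above feeds jsum (target/rpc.sh@19-20), which sums one numeric field across all poll groups: jq extracts the values, awk totals them. Reconstructed from the trace; feeding jq from the captured $stats via a herestring is an assumption, since the input redirection is not visible here:

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 for the four groups above
  jsum '.poll_groups[].io_qpairs'      # 4 x 168 = 672
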
'.poll_groups[].io_qpairs' 00:12:58.997 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.258 rmmod nvme_tcp 00:12:59.258 rmmod nvme_fabrics 00:12:59.258 rmmod nvme_keyring 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1997049 ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1997049 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1997049 ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1997049 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1997049 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1997049' 00:12:59.258 killing process with pid 1997049 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1997049 00:12:59.258 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1997049 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
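
nvmftestfini's cleanup, traced above, first unloads the initiator-side kernel modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines) and then stops the nvmf_tgt reactor process by pid. A sketch of that teardown; the retry-loop details beyond what the trace shows (break-on-success, any inter-attempt delay) are assumptions:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # emits the rmmod lines seen above
  done
  modprobe -v -r nvme-fabrics
  set -e
  nvmfpid=1997049                        # the pid killprocess handles in this run
  kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"
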
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.518 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.428 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.428 00:13:01.428 real 0m33.091s 00:13:01.428 user 1m41.366s 00:13:01.428 sys 0m5.788s 00:13:01.428 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.428 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.428 ************************************ 00:13:01.428 END TEST nvmf_rpc 00:13:01.428 ************************************ 00:13:01.428 19:48:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.428 19:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.428 19:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.429 19:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.689 ************************************ 00:13:01.689 START TEST nvmf_invalid 00:13:01.689 ************************************ 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.689 * Looking for test storage... 00:13:01.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:01.689 19:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.689 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.690 19:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.690 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:06.991 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:06.991 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.991 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:06.991 Found net devices under 0000:86:00.0: cvl_0_0 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.992 19:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:06.992 Found net devices under 0000:86:00.1: cvl_0_1 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.992 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
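
nvmf_tcp_init, traced around this point, splits the two ice ports into a small test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and reachability is checked in both directions. Consolidated from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
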
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:13:07.252 00:13:07.252 --- 10.0.0.2 ping statistics --- 00:13:07.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.252 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:13:07.252 00:13:07.252 --- 10.0.0.1 ping statistics --- 00:13:07.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.252 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:07.252 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2004839 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2004839 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2004839 ']' 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.253 [2024-07-24 19:48:58.796493] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:13:07.253 [2024-07-24 19:48:58.796536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.253 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.513 [2024-07-24 19:48:58.854238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.513 [2024-07-24 19:48:58.938486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.513 [2024-07-24 19:48:58.938518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.513 [2024-07-24 19:48:58.938525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.513 [2024-07-24 19:48:58.938532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.513 [2024-07-24 19:48:58.938537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
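
With the namespace wired up, the target itself is started inside it (nvmf/common.sh@480) and the harness waits for the RPC socket before driving it. The launch line from the trace, plus a sketch of waitforlisten's effect; the helper's actual polling mechanism is not visible in this excerpt:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                    # 2004839 in this run
  until $rpc rpc_get_methods &> /dev/null; do   # rpc.py as in the earlier sketch,
      sleep 0.5                                 # polling /var/tmp/spdk.sock
  done
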
00:13:07.513 [2024-07-24 19:48:58.938620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.513 [2024-07-24 19:48:58.938644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.513 [2024-07-24 19:48:58.938661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.513 [2024-07-24 19:48:58.938662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:08.081 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25976 00:13:08.341 [2024-07-24 19:48:59.812994] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:08.341 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:08.341 { 00:13:08.341 "nqn": "nqn.2016-06.io.spdk:cnode25976", 00:13:08.341 "tgt_name": "foobar", 00:13:08.341 "method": "nvmf_create_subsystem", 00:13:08.341 "req_id": 1 00:13:08.341 } 00:13:08.341 Got JSON-RPC error response 00:13:08.341 response: 00:13:08.341 { 00:13:08.341 "code": -32603, 00:13:08.341 "message": "Unable to find target foobar" 00:13:08.341 }' 00:13:08.341 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:08.341 { 00:13:08.341 "nqn": "nqn.2016-06.io.spdk:cnode25976", 00:13:08.341 "tgt_name": "foobar", 00:13:08.341 "method": "nvmf_create_subsystem", 00:13:08.341 "req_id": 1 00:13:08.341 } 00:13:08.341 Got JSON-RPC error response 00:13:08.341 response: 00:13:08.341 { 00:13:08.341 "code": -32603, 00:13:08.341 "message": "Unable to find target foobar" 00:13:08.341 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:08.341 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:08.341 19:48:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25352 00:13:08.600 [2024-07-24 19:48:59.997640] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25352: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:08.600 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:08.600 { 00:13:08.600 "nqn": "nqn.2016-06.io.spdk:cnode25352", 00:13:08.600 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:08.600 "method": "nvmf_create_subsystem", 00:13:08.600 "req_id": 1 00:13:08.600 } 00:13:08.600 Got JSON-RPC error 
response 00:13:08.600 response: 00:13:08.600 { 00:13:08.600 "code": -32602, 00:13:08.600 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:08.600 }' 00:13:08.600 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:08.600 { 00:13:08.600 "nqn": "nqn.2016-06.io.spdk:cnode25352", 00:13:08.600 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:08.600 "method": "nvmf_create_subsystem", 00:13:08.600 "req_id": 1 00:13:08.600 } 00:13:08.600 Got JSON-RPC error response 00:13:08.600 response: 00:13:08.600 { 00:13:08.600 "code": -32602, 00:13:08.600 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:08.600 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:08.600 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:08.600 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5261 00:13:08.600 [2024-07-24 19:49:00.190281] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5261: invalid model number 'SPDK_Controller' 00:13:08.860 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:08.860 { 00:13:08.860 "nqn": "nqn.2016-06.io.spdk:cnode5261", 00:13:08.860 "model_number": "SPDK_Controller\u001f", 00:13:08.860 "method": "nvmf_create_subsystem", 00:13:08.860 "req_id": 1 00:13:08.860 } 00:13:08.860 Got JSON-RPC error response 00:13:08.860 response: 00:13:08.860 { 00:13:08.860 "code": -32602, 00:13:08.860 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.860 }' 00:13:08.860 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:08.860 { 00:13:08.860 "nqn": "nqn.2016-06.io.spdk:cnode5261", 00:13:08.860 "model_number": "SPDK_Controller\u001f", 00:13:08.860 "method": "nvmf_create_subsystem", 00:13:08.860 "req_id": 1 00:13:08.860 } 00:13:08.860 Got JSON-RPC error response 00:13:08.860 response: 00:13:08.860 { 00:13:08.860 "code": -32602, 00:13:08.861 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.861 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.861 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
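
The three failures above are deliberate: invalid.sh feeds nvmf_create_subsystem a nonexistent target name, then a serial number and a model number each ending in the non-printable byte 0x1f, and asserts on the JSON-RPC error text every time. The pattern, with stderr folded into the capture (an assumption; the trace only shows the captured output):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25976 2>&1) || true
  [[ $out == *"Unable to find target"* ]]                       # code -32603
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode25352 2>&1) || true
  [[ $out == *"Invalid SN"* ]]                                  # code -32602
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' \
        nqn.2016-06.io.spdk:cnode5261 2>&1) || true
  [[ $out == *"Invalid MN"* ]]                                  # code -32602
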
target/invalid.sh@25 -- # printf %x 32
[xtrace condensed: gen_random_s walks ll from 0 to 20; every iteration runs printf %x <code>, echo -e '\xNN', and string+=<char>, picking codes from the 96-entry chars array ('32' '33' … '127') and appending one character per pass. The passes between 00:13:08.861 and 00:13:08.862 assemble the 21-character serial ' xAh67EfQ/>@o.Qm]K2dT'.]
00:13:08.862 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]]
00:13:08.862 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' xAh67EfQ/>@o.Qm]K2dT'
00:13:08.862 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' xAh67EfQ/>@o.Qm]K2dT' nqn.2016-06.io.spdk:cnode28051
00:13:09.122 [2024-07-24 19:49:00.511357] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28051: invalid serial number ' xAh67EfQ/>@o.Qm]K2dT'
00:13:09.122 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:13:09.122 {
00:13:09.122 "nqn": "nqn.2016-06.io.spdk:cnode28051",
00:13:09.122 "serial_number": " xAh67EfQ/>@o.Qm]K2dT",
00:13:09.122 "method": "nvmf_create_subsystem",
00:13:09.122 "req_id": 1
00:13:09.122 }
00:13:09.122 Got JSON-RPC error response
00:13:09.122 response:
00:13:09.122 {
00:13:09.122 "code": -32602,
00:13:09.122 "message": "Invalid SN xAh67EfQ/>@o.Qm]K2dT"
00:13:09.122 }'
00:13:09.122 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: … (the same response as above) … == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:09.122 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:09.122 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
[xtrace condensed: the same chars array is declared again and 41 printf %x / echo -e / string+= passes run, with timestamps advancing from 00:13:09.122 to 00:13:09.384, assembling the 41-character model number '~Y"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/"2m4\h]'.]
00:13:09.384 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]]
00:13:09.384 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 --
# echo '~Y"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/"2m4\h]' 00:13:09.384 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~Y"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/"2m4\h]' nqn.2016-06.io.spdk:cnode26380 00:13:09.384 [2024-07-24 19:49:00.956822] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26380: invalid model number '~Y"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/"2m4\h]' 00:13:09.643 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:09.643 { 00:13:09.643 "nqn": "nqn.2016-06.io.spdk:cnode26380", 00:13:09.643 "model_number": "~Y\"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/\"2m4\\h]", 00:13:09.643 "method": "nvmf_create_subsystem", 00:13:09.643 "req_id": 1 00:13:09.643 } 00:13:09.643 Got JSON-RPC error response 00:13:09.643 response: 00:13:09.643 { 00:13:09.643 "code": -32602, 00:13:09.643 "message": "Invalid MN ~Y\"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/\"2m4\\h]" 00:13:09.643 }' 00:13:09.643 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:09.643 { 00:13:09.643 "nqn": "nqn.2016-06.io.spdk:cnode26380", 00:13:09.643 "model_number": "~Y\"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/\"2m4\\h]", 00:13:09.643 "method": "nvmf_create_subsystem", 00:13:09.643 "req_id": 1 00:13:09.643 } 00:13:09.643 Got JSON-RPC error response 00:13:09.643 response: 00:13:09.643 { 00:13:09.643 "code": -32602, 00:13:09.643 "message": "Invalid MN ~Y\"-aauPGx/hq)b=9s9kmm7]3WST_4p#0/\"2m4\\h]" 00:13:09.643 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.643 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:09.643 [2024-07-24 19:49:01.153520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.643 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:09.902 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:09.902 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:09.902 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:09.902 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:09.902 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:10.162 [2024-07-24 19:49:01.536173] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:10.162 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:10.162 { 00:13:10.162 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:10.162 "listen_address": { 00:13:10.162 "trtype": "tcp", 00:13:10.162 "traddr": "", 00:13:10.162 "trsvcid": "4421" 00:13:10.162 }, 00:13:10.162 "method": "nvmf_subsystem_remove_listener", 00:13:10.162 "req_id": 1 00:13:10.162 } 00:13:10.162 Got JSON-RPC error response 00:13:10.162 response: 00:13:10.162 { 00:13:10.162 "code": -32602, 00:13:10.162 "message": "Invalid parameters" 00:13:10.162 }' 00:13:10.162 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@70 -- # [[ request: 00:13:10.162 { 00:13:10.162 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:10.162 "listen_address": { 00:13:10.162 "trtype": "tcp", 00:13:10.162 "traddr": "", 00:13:10.162 "trsvcid": "4421" 00:13:10.162 }, 00:13:10.162 "method": "nvmf_subsystem_remove_listener", 00:13:10.162 "req_id": 1 00:13:10.162 } 00:13:10.162 Got JSON-RPC error response 00:13:10.162 response: 00:13:10.162 { 00:13:10.162 "code": -32602, 00:13:10.162 "message": "Invalid parameters" 00:13:10.162 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:10.162 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11260 -i 0 00:13:10.162 [2024-07-24 19:49:01.708693] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11260: invalid cntlid range [0-65519] 00:13:10.162 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:10.162 { 00:13:10.162 "nqn": "nqn.2016-06.io.spdk:cnode11260", 00:13:10.162 "min_cntlid": 0, 00:13:10.162 "method": "nvmf_create_subsystem", 00:13:10.162 "req_id": 1 00:13:10.162 } 00:13:10.163 Got JSON-RPC error response 00:13:10.163 response: 00:13:10.163 { 00:13:10.163 "code": -32602, 00:13:10.163 "message": "Invalid cntlid range [0-65519]" 00:13:10.163 }' 00:13:10.163 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:10.163 { 00:13:10.163 "nqn": "nqn.2016-06.io.spdk:cnode11260", 00:13:10.163 "min_cntlid": 0, 00:13:10.163 "method": "nvmf_create_subsystem", 00:13:10.163 "req_id": 1 00:13:10.163 } 00:13:10.163 Got JSON-RPC error response 00:13:10.163 response: 00:13:10.163 { 00:13:10.163 "code": -32602, 00:13:10.163 "message": "Invalid cntlid range [0-65519]" 00:13:10.163 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.163 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2103 -i 65520 00:13:10.422 [2024-07-24 19:49:01.881302] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2103: invalid cntlid range [65520-65519] 00:13:10.422 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:10.422 { 00:13:10.422 "nqn": "nqn.2016-06.io.spdk:cnode2103", 00:13:10.422 "min_cntlid": 65520, 00:13:10.422 "method": "nvmf_create_subsystem", 00:13:10.422 "req_id": 1 00:13:10.422 } 00:13:10.422 Got JSON-RPC error response 00:13:10.422 response: 00:13:10.422 { 00:13:10.422 "code": -32602, 00:13:10.422 "message": "Invalid cntlid range [65520-65519]" 00:13:10.422 }' 00:13:10.422 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:10.422 { 00:13:10.422 "nqn": "nqn.2016-06.io.spdk:cnode2103", 00:13:10.422 "min_cntlid": 65520, 00:13:10.422 "method": "nvmf_create_subsystem", 00:13:10.422 "req_id": 1 00:13:10.422 } 00:13:10.422 Got JSON-RPC error response 00:13:10.422 response: 00:13:10.422 { 00:13:10.422 "code": -32602, 00:13:10.422 "message": "Invalid cntlid range [65520-65519]" 00:13:10.422 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.422 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29281 -I 0 00:13:10.682 
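[Note: the surrounding entries exercise SPDK's controller-ID validation: valid cntlid values run from 1 through 65519 (0xFFEF), so min_cntlid 0, min_cntlid 65520, max_cntlid 0, max_cntlid 65520, and min 6 paired with max 5 are all rejected with code -32602. A minimal sketch of the negative-test pattern these entries follow, using the rpc.py client path from this log; the helper name expect_rpc_error is hypothetical, not part of target/invalid.sh, and rpc.py is assumed to exit non-zero on a JSON-RPC error, as the "Got JSON-RPC error response" output suggests:]

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    expect_rpc_error() {
        # usage: expect_rpc_error <error substring> <rpc.py args...>
        local pattern=$1; shift
        local out
        out=$("$rpc" "$@" 2>&1) && return 1    # the RPC call itself must fail
        [[ $out == *"$pattern"* ]]             # and fail for the expected reason
    }

    expect_rpc_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11260 -i 0
    expect_rpc_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2103 -i 65520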
[2024-07-24 19:49:02.065939] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29281: invalid cntlid range [1-0] 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:10.682 { 00:13:10.682 "nqn": "nqn.2016-06.io.spdk:cnode29281", 00:13:10.682 "max_cntlid": 0, 00:13:10.682 "method": "nvmf_create_subsystem", 00:13:10.682 "req_id": 1 00:13:10.682 } 00:13:10.682 Got JSON-RPC error response 00:13:10.682 response: 00:13:10.682 { 00:13:10.682 "code": -32602, 00:13:10.682 "message": "Invalid cntlid range [1-0]" 00:13:10.682 }' 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:10.682 { 00:13:10.682 "nqn": "nqn.2016-06.io.spdk:cnode29281", 00:13:10.682 "max_cntlid": 0, 00:13:10.682 "method": "nvmf_create_subsystem", 00:13:10.682 "req_id": 1 00:13:10.682 } 00:13:10.682 Got JSON-RPC error response 00:13:10.682 response: 00:13:10.682 { 00:13:10.682 "code": -32602, 00:13:10.682 "message": "Invalid cntlid range [1-0]" 00:13:10.682 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10655 -I 65520 00:13:10.682 [2024-07-24 19:49:02.246501] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10655: invalid cntlid range [1-65520] 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:10.682 { 00:13:10.682 "nqn": "nqn.2016-06.io.spdk:cnode10655", 00:13:10.682 "max_cntlid": 65520, 00:13:10.682 "method": "nvmf_create_subsystem", 00:13:10.682 "req_id": 1 00:13:10.682 } 00:13:10.682 Got JSON-RPC error response 00:13:10.682 response: 00:13:10.682 { 00:13:10.682 "code": -32602, 00:13:10.682 "message": "Invalid cntlid range [1-65520]" 00:13:10.682 }' 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:10.682 { 00:13:10.682 "nqn": "nqn.2016-06.io.spdk:cnode10655", 00:13:10.682 "max_cntlid": 65520, 00:13:10.682 "method": "nvmf_create_subsystem", 00:13:10.682 "req_id": 1 00:13:10.682 } 00:13:10.682 Got JSON-RPC error response 00:13:10.682 response: 00:13:10.682 { 00:13:10.682 "code": -32602, 00:13:10.682 "message": "Invalid cntlid range [1-65520]" 00:13:10.682 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.682 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6745 -i 6 -I 5 00:13:10.940 [2024-07-24 19:49:02.439325] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6745: invalid cntlid range [6-5] 00:13:10.940 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:10.940 { 00:13:10.940 "nqn": "nqn.2016-06.io.spdk:cnode6745", 00:13:10.940 "min_cntlid": 6, 00:13:10.940 "max_cntlid": 5, 00:13:10.940 "method": "nvmf_create_subsystem", 00:13:10.940 "req_id": 1 00:13:10.940 } 00:13:10.940 Got JSON-RPC error response 00:13:10.940 response: 00:13:10.940 { 00:13:10.940 "code": -32602, 00:13:10.940 "message": "Invalid cntlid range [6-5]" 00:13:10.940 }' 00:13:10.940 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:10.940 { 00:13:10.940 "nqn": 
"nqn.2016-06.io.spdk:cnode6745", 00:13:10.940 "min_cntlid": 6, 00:13:10.940 "max_cntlid": 5, 00:13:10.940 "method": "nvmf_create_subsystem", 00:13:10.940 "req_id": 1 00:13:10.940 } 00:13:10.940 Got JSON-RPC error response 00:13:10.940 response: 00:13:10.940 { 00:13:10.940 "code": -32602, 00:13:10.940 "message": "Invalid cntlid range [6-5]" 00:13:10.940 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.940 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:11.200 { 00:13:11.200 "name": "foobar", 00:13:11.200 "method": "nvmf_delete_target", 00:13:11.200 "req_id": 1 00:13:11.200 } 00:13:11.200 Got JSON-RPC error response 00:13:11.200 response: 00:13:11.200 { 00:13:11.200 "code": -32602, 00:13:11.200 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:11.200 }' 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:11.200 { 00:13:11.200 "name": "foobar", 00:13:11.200 "method": "nvmf_delete_target", 00:13:11.200 "req_id": 1 00:13:11.200 } 00:13:11.200 Got JSON-RPC error response 00:13:11.200 response: 00:13:11.200 { 00:13:11.200 "code": -32602, 00:13:11.200 "message": "The specified target doesn't exist, cannot delete it." 00:13:11.200 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.200 rmmod nvme_tcp 00:13:11.200 rmmod nvme_fabrics 00:13:11.200 rmmod nvme_keyring 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2004839 ']' 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2004839 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2004839 ']' 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2004839 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.200 
19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2004839 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2004839' 00:13:11.200 killing process with pid 2004839 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2004839 00:13:11.200 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2004839 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.460 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.370 00:13:13.370 real 0m11.874s 00:13:13.370 user 0m19.513s 00:13:13.370 sys 0m5.099s 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:13.370 ************************************ 00:13:13.370 END TEST nvmf_invalid 00:13:13.370 ************************************ 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.370 19:49:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.630 ************************************ 00:13:13.630 START TEST nvmf_connect_stress 00:13:13.630 ************************************ 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:13.630 * Looking for test storage... 
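[Note: the nvmf_invalid teardown just above follows a standard autotest pattern: check the daemon is still alive with kill -0, confirm via ps that the pid has not been recycled by an unrelated (sudo) process, then kill it and wait for it to exit. A minimal sketch under those assumptions; it mirrors the trace but is not a verbatim copy of autotest_common.sh:]

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                        # nothing left to kill
        # refuse to kill a recycled pid that now belongs to sudo
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap it if it is our child
    }

    killprocess 2004839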
00:13:13.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.630 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[xtrace condensed: paths/export.sh@3 and @4 prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to PATH in the same way, @5 runs export PATH, and @6 echoes the final value; the golangci/protoc/go triple is stacked repeatedly at the head of PATH because export.sh is re-sourced by every test in this run.]
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.631 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:18.961 19:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:18.961 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:18.961 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
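The stretch of trace above is gather_supported_nvmf_pci_devs walking the PCI bus and bucketing NICs by vendor/device ID (e810, x722, mlx arrays) before selecting the E810 ports. A minimal standalone sketch of the same scan, assuming only the sysfs layout and keeping just the E810 bucket for brevity; this is an illustration, not SPDK's actual nvmf/common.sh helper:

  #!/usr/bin/env bash
  # Walk the PCI bus, keep Intel E810 functions (vendor 0x8086, device
  # 0x1592/0x159b as in the trace above), and list the kernel net devices
  # bound to each one.
  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      [[ $vendor == "$intel" ]] || continue
      case $device in
          0x1592|0x159b) ;;   # the two E810 IDs checked in the trace
          *) continue ;;
      esac
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "  net device: ${net##*/}"
      done
  done

On the machine in this log the scan prints the two 0000:86:00.x functions and their cvl_0_0/cvl_0_1 net devices, matching the "Found ..." lines in the trace.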
00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.961 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:18.962 Found net devices under 0000:86:00.0: cvl_0_0 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:18.962 Found net devices under 0000:86:00.1: cvl_0_1 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.962 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:13:19.223 00:13:19.223 --- 10.0.0.2 ping statistics --- 00:13:19.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.223 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:13:19.223 00:13:19.223 --- 10.0.0.1 ping statistics --- 00:13:19.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.223 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2009012 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2009012 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2009012 ']' 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.223 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.223 [2024-07-24 19:49:10.648008] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
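The nvmf_tcp_init plumbing just traced gives the target a private network namespace (cvl_0_0_ns_spdk) holding one E810 port at 10.0.0.2, leaves the peer port in the root namespace at 10.0.0.1 as the initiator side, inserts a first-position iptables rule admitting TCP/4420, and then pings both ways to prove reachability. Condensed into a plain script using the interface names from this log; on a box without the E810 pair, a veth pair created with "ip link add cvl_0_1 type veth peer name cvl_0_0" would stand in for the hardware ports:

  #!/usr/bin/env bash
  # Target-side interface moves into a private namespace; each side gets a
  # 10.0.0.x/24 address; port 4420 is opened; reachability checked both ways.
  set -e
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

The sub-millisecond round-trip times in the ping statistics above confirm the two ports are wired back-to-back before the NVMe/TCP target is started inside the namespace.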
00:13:19.223 [2024-07-24 19:49:10.648054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.223 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.223 [2024-07-24 19:49:10.705281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.223 [2024-07-24 19:49:10.784435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.223 [2024-07-24 19:49:10.784477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.223 [2024-07-24 19:49:10.784484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.223 [2024-07-24 19:49:10.784490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.223 [2024-07-24 19:49:10.784496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.224 [2024-07-24 19:49:10.784532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.224 [2024-07-24 19:49:10.784620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.224 [2024-07-24 19:49:10.784621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.164 [2024-07-24 19:49:11.497646] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.164 [2024-07-24 19:49:11.533064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.164 NULL1 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2009256 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:20.164 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.165 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.165 19:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.424 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.424 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:20.424 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.424 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.424 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.994 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.994 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:20.994 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.994 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.994 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.254 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.254 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:21.254 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.254 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.254 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.514 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.514 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:21.514 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.514 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.514 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.774 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:21.774 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.774 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.774 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.033 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.033 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:22.033 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.033 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.033 19:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.602 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.603 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:22.603 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.603 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.603 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.863 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.863 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:22.863 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.863 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.863 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.123 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.123 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:23.123 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.123 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.123 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.382 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:23.382 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.382 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.382 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.642 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.642 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:23.642 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.642 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.642 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.212 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.212 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:24.212 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.212 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.212 19:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.471 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.471 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:24.471 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.471 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.471 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.731 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.731 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:24.731 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.731 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.731 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.991 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.991 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:24.991 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.991 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.991 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.251 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.251 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:25.251 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.251 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.251 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.829 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.829 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:25.829 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.829 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.829 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.088 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:26.088 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.088 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.088 19:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.348 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.348 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:26.348 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.348 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.348 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.608 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.608 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:26.608 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.608 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.608 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.178 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:27.178 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.178 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.178 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.437 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.437 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:27.437 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.437 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.437 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.697 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:27.697 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.697 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.697 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.956 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.956 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:27.956 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.956 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.956 19:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.215 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.215 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:28.215 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.215 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.215 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.784 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.784 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:28.784 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.784 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.784 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.043 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.043 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:29.043 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.043 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.043 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.302 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.302 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:29.302 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.302 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.302 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.562 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.562 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:29.562 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.562 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.562 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.822 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.822 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:29.822 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.822 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.822 19:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.393 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2009256 00:13:30.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2009256) - No such process 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2009256 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.393 rmmod nvme_tcp 00:13:30.393 rmmod nvme_fabrics 00:13:30.393 rmmod nvme_keyring 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2009012 ']' 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2009012 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2009012 ']' 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2009012 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2009012 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2009012' 00:13:30.393 killing process with pid 2009012 00:13:30.393 19:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2009012 00:13:30.393 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2009012 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.654 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.563 00:13:32.563 real 0m19.085s 00:13:32.563 user 0m41.108s 00:13:32.563 sys 0m8.117s 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 ************************************ 00:13:32.563 END TEST nvmf_connect_stress 00:13:32.563 ************************************ 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 ************************************ 00:13:32.563 START TEST nvmf_fused_ordering 00:13:32.563 ************************************ 00:13:32.563 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.824 * Looking for test storage... 
00:13:32.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.824 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.825 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.108 19:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:38.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.108 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:38.109 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
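
The two "Found 0000:86:00.x (0x8086 - 0x159b)" records above are common.sh matching PCI vendor:device pairs for the E810 ports and then resolving each function to its kernel net device through sysfs. A minimal standalone sketch of that walk, using only the IDs visible in this log (the function name and output wording below are ours, not the common.sh source):

  #!/usr/bin/env bash
  # Enumerate Intel E810 functions (vendor 0x8086, device 0x159b) and print
  # the net device registered under each, e.g. cvl_0_0 / cvl_0_1 as above.
  find_e810_net_devs() {
      local pci vendor device net
      for pci in /sys/bus/pci/devices/*; do
          vendor=$(<"$pci/vendor")        # e.g. 0x8086
          device=$(<"$pci/device")        # e.g. 0x159b
          [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
          echo "Found ${pci##*/} ($vendor - $device)"
          for net in "$pci"/net/*; do     # one entry per netdev bound to the port
              [[ -e $net ]] && echo "  net device: ${net##*/}"
          done
      done
  }
  find_e810_net_devs
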
00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:38.109 Found net devices under 0000:86:00.0: cvl_0_0 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:38.109 Found net devices under 0000:86:00.1: cvl_0_1 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:38.109 00:13:38.109 --- 10.0.0.2 ping statistics --- 00:13:38.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.109 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:13:38.109 00:13:38.109 --- 10.0.0.1 ping statistics --- 00:13:38.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.109 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2014401 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2014401 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2014401 ']' 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.109 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.109 [2024-07-24 19:49:29.682431] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
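
nvmf_tcp_init above splits the two E810 ports across a network namespace boundary: cvl_0_0 becomes the target port inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator, and the two cross-namespace pings prove the 10.0.0.0/24 link before the target app starts. Condensed into one runnable block (names and addresses copied from this log; run as root; the real common.sh adds retries and error handling this sketch omits):

  # Target/initiator split as traced above, then the target launch.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator
  # Start the target on core 1 (-m 0x2) inside the namespace, then poll for
  # the RPC socket -- a crude stand-in for common.sh's waitforlisten.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
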
00:13:38.109 [2024-07-24 19:49:29.682471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.370 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.370 [2024-07-24 19:49:29.740073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.370 [2024-07-24 19:49:29.817946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.370 [2024-07-24 19:49:29.817982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.370 [2024-07-24 19:49:29.817989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.370 [2024-07-24 19:49:29.817994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.370 [2024-07-24 19:49:29.817999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.370 [2024-07-24 19:49:29.818023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.000 [2024-07-24 19:49:30.523927] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.000 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.001 [2024-07-24 19:49:30.540095] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.001 NULL1 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.001 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.001 [2024-07-24 19:49:30.593888] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
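
fused_ordering.sh then configures the target over the RPC socket: a TCP transport (the -o -u 8192 flags are copied verbatim from the trace), subsystem cnode1 allowing up to 10 queue pairs, a listener on the namespace-side address, and a null bdev attached as namespace 1, whose 1000 MiB of 512-byte blocks account for the "size: 1GB" in the attach banner below. The same sequence as plain rpc.py calls (arguments taken from the log; the rpc.py path is assumed relative to an SPDK checkout, talking to the default /var/tmp/spdk.sock):

  # Target configuration issued by fused_ordering.sh, as direct rpc.py calls.
  RPC="./scripts/rpc.py"   # assumed location inside the SPDK tree
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512    # 1000 MiB, 512 B blocks -> "size: 1GB"
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
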
00:13:39.001 [2024-07-24 19:49:30.593933] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014582 ] 00:13:39.260 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.198 Attached to nqn.2016-06.io.spdk:cnode1 00:13:40.198 Namespace ID: 1 size: 1GB 00:13:40.198 fused_ordering(0) 00:13:40.198 fused_ordering(1) 00:13:40.198 fused_ordering(2) 00:13:40.198 fused_ordering(3) 00:13:40.198 fused_ordering(4) 00:13:40.198 fused_ordering(5) 00:13:40.198 fused_ordering(6) 00:13:40.198 fused_ordering(7) 00:13:40.198 fused_ordering(8) 00:13:40.198 fused_ordering(9) 00:13:40.198 fused_ordering(10) 00:13:40.198 fused_ordering(11) 00:13:40.198 fused_ordering(12) 00:13:40.198 fused_ordering(13) 00:13:40.198 fused_ordering(14) 00:13:40.198 fused_ordering(15) 00:13:40.198 fused_ordering(16) 00:13:40.198 fused_ordering(17) 00:13:40.198 fused_ordering(18) 00:13:40.198 fused_ordering(19) 00:13:40.198 fused_ordering(20) 00:13:40.198 fused_ordering(21) 00:13:40.198 fused_ordering(22) 00:13:40.198 fused_ordering(23) 00:13:40.198 fused_ordering(24) 00:13:40.198 fused_ordering(25) 00:13:40.198 fused_ordering(26) 00:13:40.198 fused_ordering(27) 00:13:40.198 fused_ordering(28) 00:13:40.198 fused_ordering(29) 00:13:40.198 fused_ordering(30) 00:13:40.198 fused_ordering(31) 00:13:40.198 fused_ordering(32) 00:13:40.198 fused_ordering(33) 00:13:40.198 fused_ordering(34) 00:13:40.198 fused_ordering(35) 00:13:40.198 fused_ordering(36) 00:13:40.198 fused_ordering(37) 00:13:40.198 fused_ordering(38) 00:13:40.198 fused_ordering(39) 00:13:40.198 fused_ordering(40) 00:13:40.198 fused_ordering(41) 00:13:40.198 fused_ordering(42) 00:13:40.198 fused_ordering(43) 00:13:40.198 fused_ordering(44) 00:13:40.198 fused_ordering(45) 00:13:40.198 fused_ordering(46) 00:13:40.198 fused_ordering(47) 00:13:40.198 fused_ordering(48) 00:13:40.198 fused_ordering(49) 00:13:40.198 fused_ordering(50) 00:13:40.198 fused_ordering(51) 00:13:40.198 fused_ordering(52) 00:13:40.198 fused_ordering(53) 00:13:40.198 fused_ordering(54) 00:13:40.198 fused_ordering(55) 00:13:40.198 fused_ordering(56) 00:13:40.199 fused_ordering(57) 00:13:40.199 fused_ordering(58) 00:13:40.199 fused_ordering(59) 00:13:40.199 fused_ordering(60) 00:13:40.199 fused_ordering(61) 00:13:40.199 fused_ordering(62) 00:13:40.199 fused_ordering(63) 00:13:40.199 fused_ordering(64) 00:13:40.199 fused_ordering(65) 00:13:40.199 fused_ordering(66) 00:13:40.199 fused_ordering(67) 00:13:40.199 fused_ordering(68) 00:13:40.199 fused_ordering(69) 00:13:40.199 fused_ordering(70) 00:13:40.199 fused_ordering(71) 00:13:40.199 fused_ordering(72) 00:13:40.199 fused_ordering(73) 00:13:40.199 fused_ordering(74) 00:13:40.199 fused_ordering(75) 00:13:40.199 fused_ordering(76) 00:13:40.199 fused_ordering(77) 00:13:40.199 fused_ordering(78) 00:13:40.199 fused_ordering(79) 00:13:40.199 fused_ordering(80) 00:13:40.199 fused_ordering(81) 00:13:40.199 fused_ordering(82) 00:13:40.199 fused_ordering(83) 00:13:40.199 fused_ordering(84) 00:13:40.199 fused_ordering(85) 00:13:40.199 fused_ordering(86) 00:13:40.199 fused_ordering(87) 00:13:40.199 fused_ordering(88) 00:13:40.199 fused_ordering(89) 00:13:40.199 fused_ordering(90) 00:13:40.199 fused_ordering(91) 00:13:40.199 fused_ordering(92) 00:13:40.199 fused_ordering(93) 00:13:40.199 fused_ordering(94) 00:13:40.199 fused_ordering(95) 00:13:40.199 fused_ordering(96) 
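
The attach banner confirms the initiator reached cnode1 and sees the null namespace; each fused_ordering(N) record that follows appears to log one iteration of the test's fused-command submissions, and the counter runs from 0 to 1023 before the run winds down. To repeat the initiator side by hand from the root namespace (binary path relative to the SPDK checkout; the transport ID string is exactly as traced above):

  # Initiator-side invocation; -r takes an SPDK transport ID string naming
  # the TCP listener configured above.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
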
00:13:40.199 fused_ordering(97) 00:13:40.199 fused_ordering(98) 00:13:40.199 fused_ordering(99) 00:13:40.199 fused_ordering(100) 00:13:40.199 fused_ordering(101) 00:13:40.199 fused_ordering(102) 00:13:40.199 fused_ordering(103) 00:13:40.199 fused_ordering(104) 00:13:40.199 fused_ordering(105) 00:13:40.199 fused_ordering(106) 00:13:40.199 fused_ordering(107) 00:13:40.199 fused_ordering(108) 00:13:40.199 fused_ordering(109) 00:13:40.199 fused_ordering(110) 00:13:40.199 fused_ordering(111) 00:13:40.199 fused_ordering(112) 00:13:40.199 fused_ordering(113) 00:13:40.199 fused_ordering(114) 00:13:40.199 fused_ordering(115) 00:13:40.199 fused_ordering(116) 00:13:40.199 fused_ordering(117) 00:13:40.199 fused_ordering(118) 00:13:40.199 fused_ordering(119) 00:13:40.199 fused_ordering(120) 00:13:40.199 fused_ordering(121) 00:13:40.199 fused_ordering(122) 00:13:40.199 fused_ordering(123) 00:13:40.199 fused_ordering(124) 00:13:40.199 fused_ordering(125) 00:13:40.199 fused_ordering(126) 00:13:40.199 fused_ordering(127) 00:13:40.199 fused_ordering(128) 00:13:40.199 fused_ordering(129) 00:13:40.199 fused_ordering(130) 00:13:40.199 fused_ordering(131) 00:13:40.199 fused_ordering(132) 00:13:40.199 fused_ordering(133) 00:13:40.199 fused_ordering(134) 00:13:40.199 fused_ordering(135) 00:13:40.199 fused_ordering(136) 00:13:40.199 fused_ordering(137) 00:13:40.199 fused_ordering(138) 00:13:40.199 fused_ordering(139) 00:13:40.199 fused_ordering(140) 00:13:40.199 fused_ordering(141) 00:13:40.199 fused_ordering(142) 00:13:40.199 fused_ordering(143) 00:13:40.199 fused_ordering(144) 00:13:40.199 fused_ordering(145) 00:13:40.199 fused_ordering(146) 00:13:40.199 fused_ordering(147) 00:13:40.199 fused_ordering(148) 00:13:40.199 fused_ordering(149) 00:13:40.199 fused_ordering(150) 00:13:40.199 fused_ordering(151) 00:13:40.199 fused_ordering(152) 00:13:40.199 fused_ordering(153) 00:13:40.199 fused_ordering(154) 00:13:40.199 fused_ordering(155) 00:13:40.199 fused_ordering(156) 00:13:40.199 fused_ordering(157) 00:13:40.199 fused_ordering(158) 00:13:40.199 fused_ordering(159) 00:13:40.199 fused_ordering(160) 00:13:40.199 fused_ordering(161) 00:13:40.199 fused_ordering(162) 00:13:40.199 fused_ordering(163) 00:13:40.199 fused_ordering(164) 00:13:40.199 fused_ordering(165) 00:13:40.199 fused_ordering(166) 00:13:40.199 fused_ordering(167) 00:13:40.199 fused_ordering(168) 00:13:40.199 fused_ordering(169) 00:13:40.199 fused_ordering(170) 00:13:40.199 fused_ordering(171) 00:13:40.199 fused_ordering(172) 00:13:40.199 fused_ordering(173) 00:13:40.199 fused_ordering(174) 00:13:40.199 fused_ordering(175) 00:13:40.199 fused_ordering(176) 00:13:40.199 fused_ordering(177) 00:13:40.199 fused_ordering(178) 00:13:40.199 fused_ordering(179) 00:13:40.199 fused_ordering(180) 00:13:40.199 fused_ordering(181) 00:13:40.199 fused_ordering(182) 00:13:40.199 fused_ordering(183) 00:13:40.199 fused_ordering(184) 00:13:40.199 fused_ordering(185) 00:13:40.199 fused_ordering(186) 00:13:40.199 fused_ordering(187) 00:13:40.199 fused_ordering(188) 00:13:40.199 fused_ordering(189) 00:13:40.199 fused_ordering(190) 00:13:40.199 fused_ordering(191) 00:13:40.199 fused_ordering(192) 00:13:40.199 fused_ordering(193) 00:13:40.199 fused_ordering(194) 00:13:40.199 fused_ordering(195) 00:13:40.199 fused_ordering(196) 00:13:40.199 fused_ordering(197) 00:13:40.199 fused_ordering(198) 00:13:40.199 fused_ordering(199) 00:13:40.199 fused_ordering(200) 00:13:40.199 fused_ordering(201) 00:13:40.199 fused_ordering(202) 00:13:40.199 fused_ordering(203) 00:13:40.199 
fused_ordering(204) 00:13:40.199 fused_ordering(205) 00:13:41.137 fused_ordering(206) 00:13:41.137 fused_ordering(207) 00:13:41.137 fused_ordering(208) 00:13:41.137 fused_ordering(209) 00:13:41.137 fused_ordering(210) 00:13:41.137 fused_ordering(211) 00:13:41.137 fused_ordering(212) 00:13:41.137 fused_ordering(213) 00:13:41.137 fused_ordering(214) 00:13:41.137 fused_ordering(215) 00:13:41.137 fused_ordering(216) 00:13:41.137 fused_ordering(217) 00:13:41.137 fused_ordering(218) 00:13:41.137 fused_ordering(219) 00:13:41.137 fused_ordering(220) 00:13:41.137 fused_ordering(221) 00:13:41.137 fused_ordering(222) 00:13:41.137 fused_ordering(223) 00:13:41.137 fused_ordering(224) 00:13:41.137 fused_ordering(225) 00:13:41.137 fused_ordering(226) 00:13:41.137 fused_ordering(227) 00:13:41.137 fused_ordering(228) 00:13:41.137 fused_ordering(229) 00:13:41.137 fused_ordering(230) 00:13:41.137 fused_ordering(231) 00:13:41.137 fused_ordering(232) 00:13:41.137 fused_ordering(233) 00:13:41.137 fused_ordering(234) 00:13:41.137 fused_ordering(235) 00:13:41.137 fused_ordering(236) 00:13:41.137 fused_ordering(237) 00:13:41.137 fused_ordering(238) 00:13:41.137 fused_ordering(239) 00:13:41.137 fused_ordering(240) 00:13:41.137 fused_ordering(241) 00:13:41.137 fused_ordering(242) 00:13:41.137 fused_ordering(243) 00:13:41.137 fused_ordering(244) 00:13:41.137 fused_ordering(245) 00:13:41.137 fused_ordering(246) 00:13:41.137 fused_ordering(247) 00:13:41.137 fused_ordering(248) 00:13:41.137 fused_ordering(249) 00:13:41.137 fused_ordering(250) 00:13:41.137 fused_ordering(251) 00:13:41.137 fused_ordering(252) 00:13:41.137 fused_ordering(253) 00:13:41.137 fused_ordering(254) 00:13:41.137 fused_ordering(255) 00:13:41.137 fused_ordering(256) 00:13:41.137 fused_ordering(257) 00:13:41.137 fused_ordering(258) 00:13:41.137 fused_ordering(259) 00:13:41.137 fused_ordering(260) 00:13:41.137 fused_ordering(261) 00:13:41.137 fused_ordering(262) 00:13:41.137 fused_ordering(263) 00:13:41.137 fused_ordering(264) 00:13:41.137 fused_ordering(265) 00:13:41.137 fused_ordering(266) 00:13:41.137 fused_ordering(267) 00:13:41.137 fused_ordering(268) 00:13:41.137 fused_ordering(269) 00:13:41.137 fused_ordering(270) 00:13:41.137 fused_ordering(271) 00:13:41.137 fused_ordering(272) 00:13:41.137 fused_ordering(273) 00:13:41.137 fused_ordering(274) 00:13:41.137 fused_ordering(275) 00:13:41.137 fused_ordering(276) 00:13:41.137 fused_ordering(277) 00:13:41.137 fused_ordering(278) 00:13:41.137 fused_ordering(279) 00:13:41.137 fused_ordering(280) 00:13:41.137 fused_ordering(281) 00:13:41.137 fused_ordering(282) 00:13:41.137 fused_ordering(283) 00:13:41.137 fused_ordering(284) 00:13:41.137 fused_ordering(285) 00:13:41.137 fused_ordering(286) 00:13:41.137 fused_ordering(287) 00:13:41.137 fused_ordering(288) 00:13:41.137 fused_ordering(289) 00:13:41.137 fused_ordering(290) 00:13:41.137 fused_ordering(291) 00:13:41.137 fused_ordering(292) 00:13:41.137 fused_ordering(293) 00:13:41.137 fused_ordering(294) 00:13:41.137 fused_ordering(295) 00:13:41.137 fused_ordering(296) 00:13:41.137 fused_ordering(297) 00:13:41.137 fused_ordering(298) 00:13:41.137 fused_ordering(299) 00:13:41.137 fused_ordering(300) 00:13:41.137 fused_ordering(301) 00:13:41.137 fused_ordering(302) 00:13:41.137 fused_ordering(303) 00:13:41.137 fused_ordering(304) 00:13:41.137 fused_ordering(305) 00:13:41.137 fused_ordering(306) 00:13:41.137 fused_ordering(307) 00:13:41.137 fused_ordering(308) 00:13:41.137 fused_ordering(309) 00:13:41.137 fused_ordering(310) 00:13:41.138 fused_ordering(311) 
00:13:41.138 fused_ordering(312) 00:13:41.138 fused_ordering(313) 00:13:41.138 fused_ordering(314) 00:13:41.138 fused_ordering(315) 00:13:41.138 fused_ordering(316) 00:13:41.138 fused_ordering(317) 00:13:41.138 fused_ordering(318) 00:13:41.138 fused_ordering(319) 00:13:41.138 fused_ordering(320) 00:13:41.138 fused_ordering(321) 00:13:41.138 fused_ordering(322) 00:13:41.138 fused_ordering(323) 00:13:41.138 fused_ordering(324) 00:13:41.138 fused_ordering(325) 00:13:41.138 fused_ordering(326) 00:13:41.138 fused_ordering(327) 00:13:41.138 fused_ordering(328) 00:13:41.138 fused_ordering(329) 00:13:41.138 fused_ordering(330) 00:13:41.138 fused_ordering(331) 00:13:41.138 fused_ordering(332) 00:13:41.138 fused_ordering(333) 00:13:41.138 fused_ordering(334) 00:13:41.138 fused_ordering(335) 00:13:41.138 fused_ordering(336) 00:13:41.138 fused_ordering(337) 00:13:41.138 fused_ordering(338) 00:13:41.138 fused_ordering(339) 00:13:41.138 fused_ordering(340) 00:13:41.138 fused_ordering(341) 00:13:41.138 fused_ordering(342) 00:13:41.138 fused_ordering(343) 00:13:41.138 fused_ordering(344) 00:13:41.138 fused_ordering(345) 00:13:41.138 fused_ordering(346) 00:13:41.138 fused_ordering(347) 00:13:41.138 fused_ordering(348) 00:13:41.138 fused_ordering(349) 00:13:41.138 fused_ordering(350) 00:13:41.138 fused_ordering(351) 00:13:41.138 fused_ordering(352) 00:13:41.138 fused_ordering(353) 00:13:41.138 fused_ordering(354) 00:13:41.138 fused_ordering(355) 00:13:41.138 fused_ordering(356) 00:13:41.138 fused_ordering(357) 00:13:41.138 fused_ordering(358) 00:13:41.138 fused_ordering(359) 00:13:41.138 fused_ordering(360) 00:13:41.138 fused_ordering(361) 00:13:41.138 fused_ordering(362) 00:13:41.138 fused_ordering(363) 00:13:41.138 fused_ordering(364) 00:13:41.138 fused_ordering(365) 00:13:41.138 fused_ordering(366) 00:13:41.138 fused_ordering(367) 00:13:41.138 fused_ordering(368) 00:13:41.138 fused_ordering(369) 00:13:41.138 fused_ordering(370) 00:13:41.138 fused_ordering(371) 00:13:41.138 fused_ordering(372) 00:13:41.138 fused_ordering(373) 00:13:41.138 fused_ordering(374) 00:13:41.138 fused_ordering(375) 00:13:41.138 fused_ordering(376) 00:13:41.138 fused_ordering(377) 00:13:41.138 fused_ordering(378) 00:13:41.138 fused_ordering(379) 00:13:41.138 fused_ordering(380) 00:13:41.138 fused_ordering(381) 00:13:41.138 fused_ordering(382) 00:13:41.138 fused_ordering(383) 00:13:41.138 fused_ordering(384) 00:13:41.138 fused_ordering(385) 00:13:41.138 fused_ordering(386) 00:13:41.138 fused_ordering(387) 00:13:41.138 fused_ordering(388) 00:13:41.138 fused_ordering(389) 00:13:41.138 fused_ordering(390) 00:13:41.138 fused_ordering(391) 00:13:41.138 fused_ordering(392) 00:13:41.138 fused_ordering(393) 00:13:41.138 fused_ordering(394) 00:13:41.138 fused_ordering(395) 00:13:41.138 fused_ordering(396) 00:13:41.138 fused_ordering(397) 00:13:41.138 fused_ordering(398) 00:13:41.138 fused_ordering(399) 00:13:41.138 fused_ordering(400) 00:13:41.138 fused_ordering(401) 00:13:41.138 fused_ordering(402) 00:13:41.138 fused_ordering(403) 00:13:41.138 fused_ordering(404) 00:13:41.138 fused_ordering(405) 00:13:41.138 fused_ordering(406) 00:13:41.138 fused_ordering(407) 00:13:41.138 fused_ordering(408) 00:13:41.138 fused_ordering(409) 00:13:41.138 fused_ordering(410) 00:13:42.074 fused_ordering(411) 00:13:42.074 fused_ordering(412) 00:13:42.074 fused_ordering(413) 00:13:42.074 fused_ordering(414) 00:13:42.074 fused_ordering(415) 00:13:42.074 fused_ordering(416) 00:13:42.074 fused_ordering(417) 00:13:42.074 fused_ordering(418) 00:13:42.074 
fused_ordering(419) 00:13:42.074 fused_ordering(420) 00:13:42.074 fused_ordering(421) 00:13:42.074 fused_ordering(422) 00:13:42.074 fused_ordering(423) 00:13:42.074 fused_ordering(424) 00:13:42.074 fused_ordering(425) 00:13:42.074 fused_ordering(426) 00:13:42.075 fused_ordering(427) 00:13:42.075 fused_ordering(428) 00:13:42.075 fused_ordering(429) 00:13:42.075 fused_ordering(430) 00:13:42.075 fused_ordering(431) 00:13:42.075 fused_ordering(432) 00:13:42.075 fused_ordering(433) 00:13:42.075 fused_ordering(434) 00:13:42.075 fused_ordering(435) 00:13:42.075 fused_ordering(436) 00:13:42.075 fused_ordering(437) 00:13:42.075 fused_ordering(438) 00:13:42.075 fused_ordering(439) 00:13:42.075 fused_ordering(440) 00:13:42.075 fused_ordering(441) 00:13:42.075 fused_ordering(442) 00:13:42.075 fused_ordering(443) 00:13:42.075 fused_ordering(444) 00:13:42.075 fused_ordering(445) 00:13:42.075 fused_ordering(446) 00:13:42.075 fused_ordering(447) 00:13:42.075 fused_ordering(448) 00:13:42.075 fused_ordering(449) 00:13:42.075 fused_ordering(450) 00:13:42.075 fused_ordering(451) 00:13:42.075 fused_ordering(452) 00:13:42.075 fused_ordering(453) 00:13:42.075 fused_ordering(454) 00:13:42.075 fused_ordering(455) 00:13:42.075 fused_ordering(456) 00:13:42.075 fused_ordering(457) 00:13:42.075 fused_ordering(458) 00:13:42.075 fused_ordering(459) 00:13:42.075 fused_ordering(460) 00:13:42.075 fused_ordering(461) 00:13:42.075 fused_ordering(462) 00:13:42.075 fused_ordering(463) 00:13:42.075 fused_ordering(464) 00:13:42.075 fused_ordering(465) 00:13:42.075 fused_ordering(466) 00:13:42.075 fused_ordering(467) 00:13:42.075 fused_ordering(468) 00:13:42.075 fused_ordering(469) 00:13:42.075 fused_ordering(470) 00:13:42.075 fused_ordering(471) 00:13:42.075 fused_ordering(472) 00:13:42.075 fused_ordering(473) 00:13:42.075 fused_ordering(474) 00:13:42.075 fused_ordering(475) 00:13:42.075 fused_ordering(476) 00:13:42.075 fused_ordering(477) 00:13:42.075 fused_ordering(478) 00:13:42.075 fused_ordering(479) 00:13:42.075 fused_ordering(480) 00:13:42.075 fused_ordering(481) 00:13:42.075 fused_ordering(482) 00:13:42.075 fused_ordering(483) 00:13:42.075 fused_ordering(484) 00:13:42.075 fused_ordering(485) 00:13:42.075 fused_ordering(486) 00:13:42.075 fused_ordering(487) 00:13:42.075 fused_ordering(488) 00:13:42.075 fused_ordering(489) 00:13:42.075 fused_ordering(490) 00:13:42.075 fused_ordering(491) 00:13:42.075 fused_ordering(492) 00:13:42.075 fused_ordering(493) 00:13:42.075 fused_ordering(494) 00:13:42.075 fused_ordering(495) 00:13:42.075 fused_ordering(496) 00:13:42.075 fused_ordering(497) 00:13:42.075 fused_ordering(498) 00:13:42.075 fused_ordering(499) 00:13:42.075 fused_ordering(500) 00:13:42.075 fused_ordering(501) 00:13:42.075 fused_ordering(502) 00:13:42.075 fused_ordering(503) 00:13:42.075 fused_ordering(504) 00:13:42.075 fused_ordering(505) 00:13:42.075 fused_ordering(506) 00:13:42.075 fused_ordering(507) 00:13:42.075 fused_ordering(508) 00:13:42.075 fused_ordering(509) 00:13:42.075 fused_ordering(510) 00:13:42.075 fused_ordering(511) 00:13:42.075 fused_ordering(512) 00:13:42.075 fused_ordering(513) 00:13:42.075 fused_ordering(514) 00:13:42.075 fused_ordering(515) 00:13:42.075 fused_ordering(516) 00:13:42.075 fused_ordering(517) 00:13:42.075 fused_ordering(518) 00:13:42.075 fused_ordering(519) 00:13:42.075 fused_ordering(520) 00:13:42.075 fused_ordering(521) 00:13:42.075 fused_ordering(522) 00:13:42.075 fused_ordering(523) 00:13:42.075 fused_ordering(524) 00:13:42.075 fused_ordering(525) 00:13:42.075 fused_ordering(526) 
00:13:42.075 fused_ordering(527) 00:13:42.075 fused_ordering(528) 00:13:42.075 fused_ordering(529) 00:13:42.075 fused_ordering(530) 00:13:42.075 fused_ordering(531) 00:13:42.075 fused_ordering(532) 00:13:42.075 fused_ordering(533) 00:13:42.075 fused_ordering(534) 00:13:42.075 fused_ordering(535) 00:13:42.075 fused_ordering(536) 00:13:42.075 fused_ordering(537) 00:13:42.075 fused_ordering(538) 00:13:42.075 fused_ordering(539) 00:13:42.075 fused_ordering(540) 00:13:42.075 fused_ordering(541) 00:13:42.075 fused_ordering(542) 00:13:42.075 fused_ordering(543) 00:13:42.075 fused_ordering(544) 00:13:42.075 fused_ordering(545) 00:13:42.075 fused_ordering(546) 00:13:42.075 fused_ordering(547) 00:13:42.075 fused_ordering(548) 00:13:42.075 fused_ordering(549) 00:13:42.075 fused_ordering(550) 00:13:42.075 fused_ordering(551) 00:13:42.075 fused_ordering(552) 00:13:42.075 fused_ordering(553) 00:13:42.075 fused_ordering(554) 00:13:42.075 fused_ordering(555) 00:13:42.075 fused_ordering(556) 00:13:42.075 fused_ordering(557) 00:13:42.075 fused_ordering(558) 00:13:42.075 fused_ordering(559) 00:13:42.075 fused_ordering(560) 00:13:42.075 fused_ordering(561) 00:13:42.075 fused_ordering(562) 00:13:42.075 fused_ordering(563) 00:13:42.075 fused_ordering(564) 00:13:42.075 fused_ordering(565) 00:13:42.075 fused_ordering(566) 00:13:42.075 fused_ordering(567) 00:13:42.075 fused_ordering(568) 00:13:42.075 fused_ordering(569) 00:13:42.075 fused_ordering(570) 00:13:42.075 fused_ordering(571) 00:13:42.075 fused_ordering(572) 00:13:42.075 fused_ordering(573) 00:13:42.075 fused_ordering(574) 00:13:42.075 fused_ordering(575) 00:13:42.075 fused_ordering(576) 00:13:42.075 fused_ordering(577) 00:13:42.075 fused_ordering(578) 00:13:42.075 fused_ordering(579) 00:13:42.075 fused_ordering(580) 00:13:42.075 fused_ordering(581) 00:13:42.075 fused_ordering(582) 00:13:42.075 fused_ordering(583) 00:13:42.075 fused_ordering(584) 00:13:42.075 fused_ordering(585) 00:13:42.075 fused_ordering(586) 00:13:42.075 fused_ordering(587) 00:13:42.075 fused_ordering(588) 00:13:42.075 fused_ordering(589) 00:13:42.075 fused_ordering(590) 00:13:42.075 fused_ordering(591) 00:13:42.075 fused_ordering(592) 00:13:42.075 fused_ordering(593) 00:13:42.075 fused_ordering(594) 00:13:42.075 fused_ordering(595) 00:13:42.075 fused_ordering(596) 00:13:42.075 fused_ordering(597) 00:13:42.075 fused_ordering(598) 00:13:42.075 fused_ordering(599) 00:13:42.075 fused_ordering(600) 00:13:42.075 fused_ordering(601) 00:13:42.075 fused_ordering(602) 00:13:42.075 fused_ordering(603) 00:13:42.075 fused_ordering(604) 00:13:42.075 fused_ordering(605) 00:13:42.075 fused_ordering(606) 00:13:42.075 fused_ordering(607) 00:13:42.075 fused_ordering(608) 00:13:42.075 fused_ordering(609) 00:13:42.075 fused_ordering(610) 00:13:42.075 fused_ordering(611) 00:13:42.075 fused_ordering(612) 00:13:42.075 fused_ordering(613) 00:13:42.075 fused_ordering(614) 00:13:42.076 fused_ordering(615) 00:13:43.017 fused_ordering(616) 00:13:43.017 fused_ordering(617) 00:13:43.017 fused_ordering(618) 00:13:43.017 fused_ordering(619) 00:13:43.017 fused_ordering(620) 00:13:43.017 fused_ordering(621) 00:13:43.017 fused_ordering(622) 00:13:43.017 fused_ordering(623) 00:13:43.017 fused_ordering(624) 00:13:43.017 fused_ordering(625) 00:13:43.017 fused_ordering(626) 00:13:43.017 fused_ordering(627) 00:13:43.017 fused_ordering(628) 00:13:43.017 fused_ordering(629) 00:13:43.017 fused_ordering(630) 00:13:43.017 fused_ordering(631) 00:13:43.017 fused_ordering(632) 00:13:43.017 fused_ordering(633) 00:13:43.017 
fused_ordering(634) 00:13:43.017 fused_ordering(635) 00:13:43.017 fused_ordering(636) 00:13:43.017 fused_ordering(637) 00:13:43.017 fused_ordering(638) 00:13:43.017 fused_ordering(639) 00:13:43.017 fused_ordering(640) 00:13:43.017 fused_ordering(641) 00:13:43.017 fused_ordering(642) 00:13:43.017 fused_ordering(643) 00:13:43.017 fused_ordering(644) 00:13:43.017 fused_ordering(645) 00:13:43.017 fused_ordering(646) 00:13:43.017 fused_ordering(647) 00:13:43.017 fused_ordering(648) 00:13:43.017 fused_ordering(649) 00:13:43.017 fused_ordering(650) 00:13:43.017 fused_ordering(651) 00:13:43.017 fused_ordering(652) 00:13:43.017 fused_ordering(653) 00:13:43.017 fused_ordering(654) 00:13:43.017 fused_ordering(655) 00:13:43.017 fused_ordering(656) 00:13:43.017 fused_ordering(657) 00:13:43.017 fused_ordering(658) 00:13:43.017 fused_ordering(659) 00:13:43.017 fused_ordering(660) 00:13:43.017 fused_ordering(661) 00:13:43.017 fused_ordering(662) 00:13:43.017 fused_ordering(663) 00:13:43.017 fused_ordering(664) 00:13:43.017 fused_ordering(665) 00:13:43.017 fused_ordering(666) 00:13:43.017 fused_ordering(667) 00:13:43.017 fused_ordering(668) 00:13:43.017 fused_ordering(669) 00:13:43.017 fused_ordering(670) 00:13:43.017 fused_ordering(671) 00:13:43.017 fused_ordering(672) 00:13:43.017 fused_ordering(673) 00:13:43.017 fused_ordering(674) 00:13:43.017 fused_ordering(675) 00:13:43.017 fused_ordering(676) 00:13:43.017 fused_ordering(677) 00:13:43.017 fused_ordering(678) 00:13:43.017 fused_ordering(679) 00:13:43.017 fused_ordering(680) 00:13:43.017 fused_ordering(681) 00:13:43.017 fused_ordering(682) 00:13:43.017 fused_ordering(683) 00:13:43.017 fused_ordering(684) 00:13:43.017 fused_ordering(685) 00:13:43.017 fused_ordering(686) 00:13:43.017 fused_ordering(687) 00:13:43.017 fused_ordering(688) 00:13:43.017 fused_ordering(689) 00:13:43.017 fused_ordering(690) 00:13:43.017 fused_ordering(691) 00:13:43.017 fused_ordering(692) 00:13:43.017 fused_ordering(693) 00:13:43.017 fused_ordering(694) 00:13:43.017 fused_ordering(695) 00:13:43.017 fused_ordering(696) 00:13:43.017 fused_ordering(697) 00:13:43.017 fused_ordering(698) 00:13:43.017 fused_ordering(699) 00:13:43.017 fused_ordering(700) 00:13:43.017 fused_ordering(701) 00:13:43.017 fused_ordering(702) 00:13:43.017 fused_ordering(703) 00:13:43.017 fused_ordering(704) 00:13:43.017 fused_ordering(705) 00:13:43.017 fused_ordering(706) 00:13:43.017 fused_ordering(707) 00:13:43.017 fused_ordering(708) 00:13:43.017 fused_ordering(709) 00:13:43.017 fused_ordering(710) 00:13:43.017 fused_ordering(711) 00:13:43.017 fused_ordering(712) 00:13:43.017 fused_ordering(713) 00:13:43.017 fused_ordering(714) 00:13:43.017 fused_ordering(715) 00:13:43.017 fused_ordering(716) 00:13:43.018 fused_ordering(717) 00:13:43.018 fused_ordering(718) 00:13:43.018 fused_ordering(719) 00:13:43.018 fused_ordering(720) 00:13:43.018 fused_ordering(721) 00:13:43.018 fused_ordering(722) 00:13:43.018 fused_ordering(723) 00:13:43.018 fused_ordering(724) 00:13:43.018 fused_ordering(725) 00:13:43.018 fused_ordering(726) 00:13:43.018 fused_ordering(727) 00:13:43.018 fused_ordering(728) 00:13:43.018 fused_ordering(729) 00:13:43.018 fused_ordering(730) 00:13:43.018 fused_ordering(731) 00:13:43.018 fused_ordering(732) 00:13:43.018 fused_ordering(733) 00:13:43.018 fused_ordering(734) 00:13:43.018 fused_ordering(735) 00:13:43.018 fused_ordering(736) 00:13:43.018 fused_ordering(737) 00:13:43.018 fused_ordering(738) 00:13:43.018 fused_ordering(739) 00:13:43.018 fused_ordering(740) 00:13:43.018 fused_ordering(741) 
00:13:43.018 fused_ordering(742) 00:13:43.018 fused_ordering(743) 00:13:43.018 fused_ordering(744) 00:13:43.018 fused_ordering(745) 00:13:43.018 fused_ordering(746) 00:13:43.018 fused_ordering(747) 00:13:43.018 fused_ordering(748) 00:13:43.018 fused_ordering(749) 00:13:43.018 fused_ordering(750) 00:13:43.018 fused_ordering(751) 00:13:43.018 fused_ordering(752) 00:13:43.018 fused_ordering(753) 00:13:43.018 fused_ordering(754) 00:13:43.018 fused_ordering(755) 00:13:43.018 fused_ordering(756) 00:13:43.018 fused_ordering(757) 00:13:43.018 fused_ordering(758) 00:13:43.018 fused_ordering(759) 00:13:43.018 fused_ordering(760) 00:13:43.018 fused_ordering(761) 00:13:43.018 fused_ordering(762) 00:13:43.018 fused_ordering(763) 00:13:43.018 fused_ordering(764) 00:13:43.018 fused_ordering(765) 00:13:43.018 fused_ordering(766) 00:13:43.018 fused_ordering(767) 00:13:43.018 fused_ordering(768) 00:13:43.018 fused_ordering(769) 00:13:43.018 fused_ordering(770) 00:13:43.018 fused_ordering(771) 00:13:43.018 fused_ordering(772) 00:13:43.018 fused_ordering(773) 00:13:43.018 fused_ordering(774) 00:13:43.018 fused_ordering(775) 00:13:43.018 fused_ordering(776) 00:13:43.018 fused_ordering(777) 00:13:43.018 fused_ordering(778) 00:13:43.018 fused_ordering(779) 00:13:43.018 fused_ordering(780) 00:13:43.018 fused_ordering(781) 00:13:43.018 fused_ordering(782) 00:13:43.018 fused_ordering(783) 00:13:43.018 fused_ordering(784) 00:13:43.018 fused_ordering(785) 00:13:43.018 fused_ordering(786) 00:13:43.018 fused_ordering(787) 00:13:43.018 fused_ordering(788) 00:13:43.018 fused_ordering(789) 00:13:43.018 fused_ordering(790) 00:13:43.018 fused_ordering(791) 00:13:43.018 fused_ordering(792) 00:13:43.018 fused_ordering(793) 00:13:43.018 fused_ordering(794) 00:13:43.018 fused_ordering(795) 00:13:43.018 fused_ordering(796) 00:13:43.018 fused_ordering(797) 00:13:43.018 fused_ordering(798) 00:13:43.018 fused_ordering(799) 00:13:43.018 fused_ordering(800) 00:13:43.018 fused_ordering(801) 00:13:43.018 fused_ordering(802) 00:13:43.018 fused_ordering(803) 00:13:43.018 fused_ordering(804) 00:13:43.018 fused_ordering(805) 00:13:43.018 fused_ordering(806) 00:13:43.018 fused_ordering(807) 00:13:43.018 fused_ordering(808) 00:13:43.018 fused_ordering(809) 00:13:43.018 fused_ordering(810) 00:13:43.018 fused_ordering(811) 00:13:43.018 fused_ordering(812) 00:13:43.018 fused_ordering(813) 00:13:43.018 fused_ordering(814) 00:13:43.018 fused_ordering(815) 00:13:43.018 fused_ordering(816) 00:13:43.018 fused_ordering(817) 00:13:43.018 fused_ordering(818) 00:13:43.018 fused_ordering(819) 00:13:43.018 fused_ordering(820) 00:13:43.958 fused_ordering(821) 00:13:43.958 fused_ordering(822) 00:13:43.958 fused_ordering(823) 00:13:43.958 fused_ordering(824) 00:13:43.958 fused_ordering(825) 00:13:43.958 fused_ordering(826) 00:13:43.958 fused_ordering(827) 00:13:43.958 fused_ordering(828) 00:13:43.958 fused_ordering(829) 00:13:43.958 fused_ordering(830) 00:13:43.958 fused_ordering(831) 00:13:43.958 fused_ordering(832) 00:13:43.958 fused_ordering(833) 00:13:43.958 fused_ordering(834) 00:13:43.958 fused_ordering(835) 00:13:43.958 fused_ordering(836) 00:13:43.958 fused_ordering(837) 00:13:43.958 fused_ordering(838) 00:13:43.958 fused_ordering(839) 00:13:43.958 fused_ordering(840) 00:13:43.958 fused_ordering(841) 00:13:43.958 fused_ordering(842) 00:13:43.958 fused_ordering(843) 00:13:43.958 fused_ordering(844) 00:13:43.958 fused_ordering(845) 00:13:43.958 fused_ordering(846) 00:13:43.958 fused_ordering(847) 00:13:43.958 fused_ordering(848) 00:13:43.958 
fused_ordering(849) 00:13:43.958 fused_ordering(850) 00:13:43.958 fused_ordering(851) 00:13:43.958 fused_ordering(852) 00:13:43.958 fused_ordering(853) 00:13:43.958 fused_ordering(854) 00:13:43.958 fused_ordering(855) 00:13:43.958 fused_ordering(856) 00:13:43.958 fused_ordering(857) 00:13:43.958 fused_ordering(858) 00:13:43.958 fused_ordering(859) 00:13:43.958 fused_ordering(860) 00:13:43.958 fused_ordering(861) 00:13:43.958 fused_ordering(862) 00:13:43.958 fused_ordering(863) 00:13:43.958 fused_ordering(864) 00:13:43.958 fused_ordering(865) 00:13:43.958 fused_ordering(866) 00:13:43.958 fused_ordering(867) 00:13:43.958 fused_ordering(868) 00:13:43.958 fused_ordering(869) 00:13:43.958 fused_ordering(870) 00:13:43.958 fused_ordering(871) 00:13:43.958 fused_ordering(872) 00:13:43.958 fused_ordering(873) 00:13:43.958 fused_ordering(874) 00:13:43.958 fused_ordering(875) 00:13:43.958 fused_ordering(876) 00:13:43.958 fused_ordering(877) 00:13:43.958 fused_ordering(878) 00:13:43.958 fused_ordering(879) 00:13:43.958 fused_ordering(880) 00:13:43.958 fused_ordering(881) 00:13:43.958 fused_ordering(882) 00:13:43.958 fused_ordering(883) 00:13:43.958 fused_ordering(884) 00:13:43.958 fused_ordering(885) 00:13:43.958 fused_ordering(886) 00:13:43.958 fused_ordering(887) 00:13:43.958 fused_ordering(888) 00:13:43.958 fused_ordering(889) 00:13:43.958 fused_ordering(890) 00:13:43.958 fused_ordering(891) 00:13:43.958 fused_ordering(892) 00:13:43.958 fused_ordering(893) 00:13:43.958 fused_ordering(894) 00:13:43.958 fused_ordering(895) 00:13:43.958 fused_ordering(896) 00:13:43.958 fused_ordering(897) 00:13:43.958 fused_ordering(898) 00:13:43.958 fused_ordering(899) 00:13:43.958 fused_ordering(900) 00:13:43.958 fused_ordering(901) 00:13:43.958 fused_ordering(902) 00:13:43.958 fused_ordering(903) 00:13:43.958 fused_ordering(904) 00:13:43.958 fused_ordering(905) 00:13:43.958 fused_ordering(906) 00:13:43.958 fused_ordering(907) 00:13:43.958 fused_ordering(908) 00:13:43.958 fused_ordering(909) 00:13:43.958 fused_ordering(910) 00:13:43.958 fused_ordering(911) 00:13:43.958 fused_ordering(912) 00:13:43.958 fused_ordering(913) 00:13:43.958 fused_ordering(914) 00:13:43.958 fused_ordering(915) 00:13:43.958 fused_ordering(916) 00:13:43.958 fused_ordering(917) 00:13:43.958 fused_ordering(918) 00:13:43.958 fused_ordering(919) 00:13:43.958 fused_ordering(920) 00:13:43.958 fused_ordering(921) 00:13:43.958 fused_ordering(922) 00:13:43.958 fused_ordering(923) 00:13:43.958 fused_ordering(924) 00:13:43.958 fused_ordering(925) 00:13:43.958 fused_ordering(926) 00:13:43.958 fused_ordering(927) 00:13:43.958 fused_ordering(928) 00:13:43.958 fused_ordering(929) 00:13:43.958 fused_ordering(930) 00:13:43.958 fused_ordering(931) 00:13:43.958 fused_ordering(932) 00:13:43.958 fused_ordering(933) 00:13:43.958 fused_ordering(934) 00:13:43.958 fused_ordering(935) 00:13:43.958 fused_ordering(936) 00:13:43.958 fused_ordering(937) 00:13:43.958 fused_ordering(938) 00:13:43.958 fused_ordering(939) 00:13:43.958 fused_ordering(940) 00:13:43.958 fused_ordering(941) 00:13:43.958 fused_ordering(942) 00:13:43.958 fused_ordering(943) 00:13:43.958 fused_ordering(944) 00:13:43.958 fused_ordering(945) 00:13:43.958 fused_ordering(946) 00:13:43.958 fused_ordering(947) 00:13:43.958 fused_ordering(948) 00:13:43.958 fused_ordering(949) 00:13:43.958 fused_ordering(950) 00:13:43.958 fused_ordering(951) 00:13:43.958 fused_ordering(952) 00:13:43.958 fused_ordering(953) 00:13:43.958 fused_ordering(954) 00:13:43.958 fused_ordering(955) 00:13:43.958 fused_ordering(956) 
00:13:43.958 fused_ordering(957) 00:13:43.958 fused_ordering(958) 00:13:43.958 fused_ordering(959) 00:13:43.958 fused_ordering(960) 00:13:43.958 fused_ordering(961) 00:13:43.958 fused_ordering(962) 00:13:43.958 fused_ordering(963) 00:13:43.958 fused_ordering(964) 00:13:43.958 fused_ordering(965) 00:13:43.958 fused_ordering(966) 00:13:43.958 fused_ordering(967) 00:13:43.958 fused_ordering(968) 00:13:43.958 fused_ordering(969) 00:13:43.958 fused_ordering(970) 00:13:43.958 fused_ordering(971) 00:13:43.958 fused_ordering(972) 00:13:43.958 fused_ordering(973) 00:13:43.958 fused_ordering(974) 00:13:43.958 fused_ordering(975) 00:13:43.958 fused_ordering(976) 00:13:43.958 fused_ordering(977) 00:13:43.958 fused_ordering(978) 00:13:43.958 fused_ordering(979) 00:13:43.958 fused_ordering(980) 00:13:43.958 fused_ordering(981) 00:13:43.958 fused_ordering(982) 00:13:43.958 fused_ordering(983) 00:13:43.958 fused_ordering(984) 00:13:43.958 fused_ordering(985) 00:13:43.958 fused_ordering(986) 00:13:43.958 fused_ordering(987) 00:13:43.958 fused_ordering(988) 00:13:43.958 fused_ordering(989) 00:13:43.959 fused_ordering(990) 00:13:43.959 fused_ordering(991) 00:13:43.959 fused_ordering(992) 00:13:43.959 fused_ordering(993) 00:13:43.959 fused_ordering(994) 00:13:43.959 fused_ordering(995) 00:13:43.959 fused_ordering(996) 00:13:43.959 fused_ordering(997) 00:13:43.959 fused_ordering(998) 00:13:43.959 fused_ordering(999) 00:13:43.959 fused_ordering(1000) 00:13:43.959 fused_ordering(1001) 00:13:43.959 fused_ordering(1002) 00:13:43.959 fused_ordering(1003) 00:13:43.959 fused_ordering(1004) 00:13:43.959 fused_ordering(1005) 00:13:43.959 fused_ordering(1006) 00:13:43.959 fused_ordering(1007) 00:13:43.959 fused_ordering(1008) 00:13:43.959 fused_ordering(1009) 00:13:43.959 fused_ordering(1010) 00:13:43.959 fused_ordering(1011) 00:13:43.959 fused_ordering(1012) 00:13:43.959 fused_ordering(1013) 00:13:43.959 fused_ordering(1014) 00:13:43.959 fused_ordering(1015) 00:13:43.959 fused_ordering(1016) 00:13:43.959 fused_ordering(1017) 00:13:43.959 fused_ordering(1018) 00:13:43.960 fused_ordering(1019) 00:13:43.960 fused_ordering(1020) 00:13:43.960 fused_ordering(1021) 00:13:43.960 fused_ordering(1022) 00:13:43.960 fused_ordering(1023) 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.960 rmmod nvme_tcp 00:13:43.960 rmmod nvme_fabrics 00:13:43.960 rmmod nvme_keyring 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2014401 ']' 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2014401 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2014401 ']' 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2014401 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2014401 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2014401' 00:13:43.960 killing process with pid 2014401 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2014401 00:13:43.960 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2014401 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.220 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.130 00:13:46.130 real 0m13.491s 00:13:46.130 user 0m9.220s 00:13:46.130 sys 0m7.298s 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.130 ************************************ 00:13:46.130 END TEST nvmf_fused_ordering 00:13:46.130 ************************************ 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.130 19:49:37 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.130 ************************************ 00:13:46.130 START TEST nvmf_ns_masking 00:13:46.130 ************************************ 00:13:46.130 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:46.390 * Looking for test storage... 00:13:46.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
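Before any connect, common.sh derives the initiator identity from nvme-cli, as the @17/@18 steps above show. Collected into a runnable sketch (the parameter-expansion split of the UUID is an assumption about how NVME_HOSTID is produced):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# every nvme connect below can then carry "${NVME_HOST[@]}"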
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin: ... :/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin: ... :/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin: ... :/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin: ... :/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.391 19:49:37
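The PATH dumps above grow because each sourced export script re-prepends directories that are already present. A dedup pass of the following shape would keep only the first occurrence of each entry; this is illustrative, not something paths/export.sh actually does:

dedupe_path() {
    local out='' dir
    local IFS=':'
    for dir in $PATH; do
        case ":$out:" in
            *":$dir:"*) ;;                    # already present, skip
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
PATH=$(dedupe_path)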
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1c775837-fdcb-4bb7-9846-cb9f8831c14c 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=38283a0b-db0b-450d-9920-5f4f11fa1036 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=98f69c5d-2ba8-407f-812b-87272aa51aa6 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.391 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
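The device scan being traced here fills per-family arrays (e810, x722, mlx) from a vendor:device cache, then maps each PCI function to its net device through sysfs. An equivalent stand-alone scan, using lspci instead of common.sh's pci_bus_cache (that tool substitution is the assumption):

intel=8086
# enumerate Intel E810 functions (device ID 0x159b), domain-qualified addresses
mapfile -t e810 < <(lspci -Dmmn -d "$intel:159b" | awk '{print $1}')
for pci in "${e810[@]}"; do
    echo "Found $pci (0x$intel - 0x159b)"
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue
        echo "Found net devices under $pci: ${net##*/}"
    done
done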
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:51.676 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:51.676 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.676 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:51.676 Found net devices under 0000:86:00.0: cvl_0_0 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:51.677 Found net devices under 0000:86:00.1: cvl_0_1 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.677 19:49:42 
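Collected from the surrounding trace lines, the nvmf_tcp_init plumbing gives the target side a private network namespace (interface names cvl_0_0/cvl_0_1 are what this rig discovered):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                    # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target, as traced below
ip netns exec $NS ping -c 1 10.0.0.1             # target ns -> initiator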
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:13:51.677 00:13:51.677 --- 10.0.0.2 ping statistics --- 00:13:51.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.677 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:51.677 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:13:51.677 00:13:51.677 --- 10.0.0.1 ping statistics --- 00:13:51.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.677 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2018643 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2018643 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2018643 ']' 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
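nvmfappstart launches nvmf_tgt inside that namespace, and the waitforlisten call that follows polls until the app's RPC socket answers. A sketch of that polling loop (the rpc_get_methods probe and the poll interval are assumptions; the pid/socket echo is what the trace prints):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1            # app died before listening
        "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}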
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.677 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.677 [2024-07-24 19:49:43.079347] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:13:51.677 [2024-07-24 19:49:43.079392] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.677 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.677 [2024-07-24 19:49:43.135733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.677 [2024-07-24 19:49:43.214682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.677 [2024-07-24 19:49:43.214718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.677 [2024-07-24 19:49:43.214724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.677 [2024-07-24 19:49:43.214731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.677 [2024-07-24 19:49:43.214736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.677 [2024-07-24 19:49:43.214757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.617 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.617 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:52.617 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.617 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:52.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.618 [2024-07-24 19:49:44.063307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.618 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:52.618 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:52.618 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:52.878 Malloc1 00:13:52.878 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:52.878 Malloc2 00:13:52.878 19:49:44 
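The next trace lines provision the target over that socket. Collected, the sequence is ($rpc stands for the scripts/rpc.py path used throughout the log):

rpc="$rpc_py"
$rpc nvmf_create_transport -t tcp -o -u 8192     # NVMF_TRANSPORT_OPTS, exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdevs with 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420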
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:53.138 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:53.398 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.398 [2024-07-24 19:49:44.965225] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.398 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:53.398 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98f69c5d-2ba8-407f-812b-87272aa51aa6 -a 10.0.0.2 -s 4420 -i 4 00:13:53.658 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.658 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:53.658 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.658 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:53.658 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:55.568 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.828 [ 0]:0x1 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
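The connect helper passes the generated HOSTID via -I and then spins in waitforserial until lsblk reports the expected number of SPDK namespaces. Reconstructed from the @1198-@1208 steps in the trace:

waitforserial() {
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME      # one namespace expected after the first connect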
/dev/nvme0 -n 0x1 -o json 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337a62eb3ded45bda82607f4a78fb25e 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337a62eb3ded45bda82607f4a78fb25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.828 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.088 [ 0]:0x1 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337a62eb3ded45bda82607f4a78fb25e 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337a62eb3ded45bda82607f4a78fb25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:56.088 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.088 [ 1]:0x2 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:56.089 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.348 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.609 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:56.609 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:56.609 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
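ns_is_visible, the check driven repeatedly above, decides visibility from two probes: the namespace must appear in nvme list-ns and must report a non-zero NGUID. A sketch assembled from the @43-@45 steps (not the verbatim ns_masking.sh source):

ns_is_visible() {
    local nsid=$1
    # nvme prints entries like "[ 0]:0x1"; absence means not even listed
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    # a masked namespace identifies with an all-zero NGUID
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1 && echo "nsid 0x1 visible to this host"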
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98f69c5d-2ba8-407f-812b-87272aa51aa6 -a 10.0.0.2 -s 4420 -i 4 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:56.869 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:58.778 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:59.038 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:59.038 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:59.038 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:59.038 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:59.038 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.039 [ 0]:0x2 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.039 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.299 [ 0]:0x1 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337a62eb3ded45bda82607f4a78fb25e 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337a62eb3ded45bda82607f4a78fb25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.299 [ 1]:0x2 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.299 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:59.558 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.559 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.559 [ 0]:0x2 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
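The NOT wrapper exercised above asserts that a command fails: the exit status es is captured, and NOT succeeds only when it is non-zero. Stripped to its core (the valid_exec_arg and es > 128 signal checks visible in the trace are omitted here):

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # success exactly when the wrapped command failed
}
NOT ns_is_visible 0x1 && echo "nsid 0x1 is masked, as the test expects"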
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.559 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.818 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:59.818 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 98f69c5d-2ba8-407f-812b-87272aa51aa6 -a 10.0.0.2 -s 4420 -i 4 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:00.078 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.020 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.021 [ 0]:0x1 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=337a62eb3ded45bda82607f4a78fb25e 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 337a62eb3ded45bda82607f4a78fb25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.021 [ 1]:0x2 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.021 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.281 19:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.281 [ 0]:0x2 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.281 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:02.542 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.542 [2024-07-24 19:49:54.050936] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:02.542 request: 00:14:02.542 { 00:14:02.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.542 "nsid": 2, 00:14:02.542 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.542 "method": "nvmf_ns_remove_host", 00:14:02.542 "req_id": 1 00:14:02.542 } 00:14:02.542 Got JSON-RPC error response 00:14:02.542 response: 00:14:02.542 { 00:14:02.542 "code": -32602, 00:14:02.542 "message": "Invalid parameters" 00:14:02.542 } 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
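The -32602 response above is the expected outcome: namespace 2 was added without --no-auto-visible, so per-host visibility cannot be toggled for it. The harness therefore wraps the call in NOT; an equivalent hand-written check would be:

rpc="$rpc_py"
if $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
    echo "unexpected success"; exit 1
else
    echo "failed as expected: Invalid parameters (-32602)"
fi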
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.542 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.542 [ 0]:0x2 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e83b14732044a1d88fdcd8739c0dca5 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e83b14732044a1d88fdcd8739c0dca5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2020641 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2020641 /var/tmp/host.sock 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:02.802 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2020641 ']' 00:14:02.803 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.803 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.803 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:02.803 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.803 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 [2024-07-24 19:49:54.256534] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
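The visibility probe replayed throughout the trace above (ns_masking.sh@43-45) reduces to a short helper: the NSID must appear in the controller's active namespace list, and Identify Namespace must report a non-zero NGUID. A minimal sketch reconstructed from the xtrace, not copied from ns_masking.sh; the grep -q simplification and hard-coded /dev/nvme0 follow the trace, everything else is an assumption:

    ns_is_visible() {
        local nsid=$1 nguid
        # namespace must be in the controller's active NS list
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # and its NGUID must not be all zeros; in this trace a masked
        # namespace identifies with a zero NGUID
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }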
00:14:02.803 [2024-07-24 19:49:54.256582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020641 ] 00:14:02.803 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.803 [2024-07-24 19:49:54.311271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.803 [2024-07-24 19:49:54.385419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.742 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.742 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:03.742 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.742 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1c775837-fdcb-4bb7-9846-cb9f8831c14c 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1C775837FDCB4BB79846CB9F8831C14C -i 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 38283a0b-db0b-450d-9920-5f4f11fa1036 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:04.002 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 38283A0BDB0B450D99205F4F11FA1036 -i 00:14:04.261 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.521 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:04.521 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.521 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:05.090 nvme0n1 00:14:05.090 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:05.090 19:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:05.350 nvme1n2 00:14:05.350 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:05.350 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:05.350 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:05.350 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:05.350 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:05.610 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:05.610 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:05.610 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:05.610 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:05.610 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1c775837-fdcb-4bb7-9846-cb9f8831c14c == \1\c\7\7\5\8\3\7\-\f\d\c\b\-\4\b\b\7\-\9\8\4\6\-\c\b\9\f\8\8\3\1\c\1\4\c ]] 00:14:05.610 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:05.610 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:05.610 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 38283a0b-db0b-450d-9920-5f4f11fa1036 == \3\8\2\8\3\a\0\b\-\d\b\0\b\-\4\5\0\d\-\9\9\2\0\-\5\f\4\f\1\1\f\a\1\0\3\6 ]] 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2020641 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2020641 ']' 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2020641 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2020641 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2020641' 00:14:05.870 killing process with pid 2020641 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2020641 00:14:05.870 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2020641 00:14:06.138 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.397 rmmod nvme_tcp 00:14:06.397 rmmod nvme_fabrics 00:14:06.397 rmmod nvme_keyring 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2018643 ']' 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2018643 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2018643 ']' 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2018643 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2018643 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2018643' 00:14:06.397 killing process with pid 2018643 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2018643 00:14:06.397 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2018643 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.657 
19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.657 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.199 00:14:09.199 real 0m22.549s 00:14:09.199 user 0m24.389s 00:14:09.199 sys 0m5.938s 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 ************************************ 00:14:09.199 END TEST nvmf_ns_masking 00:14:09.199 ************************************ 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 ************************************ 00:14:09.199 START TEST nvmf_nvme_cli 00:14:09.199 ************************************ 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:09.199 * Looking for test storage... 
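At 19:49:55 above, the ns_masking test re-created both namespaces with explicit NGUIDs derived from UUIDs (nvmf/common.sh@759). The conversion is just dash-stripping; a sketch, with the upper-casing assumed from the values visible in the trace:

    # UUID -> NGUID, as passed to nvmf_subsystem_add_ns -g
    uuid2nguid() {
        tr -d - <<< "${1^^}"   # drop dashes; ${1^^} upper-cases (assumption)
    }
    # uuid2nguid 1c775837-fdcb-4bb7-9846-cb9f8831c14c
    #   -> 1C775837FDCB4BB79846CB9F8831C14C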
00:14:09.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.199 19:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.199 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.200 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.200 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.200 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.200 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.200 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.483 19:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:14.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:14.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.483 19:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:14.483 Found net devices under 0000:86:00.0: cvl_0_0 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:14.483 Found net devices under 0000:86:00.1: cvl_0_1 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.483 19:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.483 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:14.483 00:14:14.483 --- 10.0.0.2 ping statistics --- 00:14:14.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.483 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:14:14.484 00:14:14.484 --- 10.0.0.1 ping statistics --- 00:14:14.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.484 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2024848 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2024848 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2024848 ']' 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.484 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.484 [2024-07-24 19:50:05.864277] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
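The nvmf_tcp_init sequence above (common.sh@229-268) builds the phy test topology: one port of the e810 NIC is moved into a network namespace and serves as the target at 10.0.0.2, while the other port stays in the root namespace as the initiator at 10.0.0.1. The commands, lifted from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # then the reverse ping above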
00:14:14.484 [2024-07-24 19:50:05.864319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.484 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.484 [2024-07-24 19:50:05.921332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.484 [2024-07-24 19:50:06.003804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.484 [2024-07-24 19:50:06.003843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.484 [2024-07-24 19:50:06.003852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.484 [2024-07-24 19:50:06.003858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.484 [2024-07-24 19:50:06.003863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.484 [2024-07-24 19:50:06.003906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.484 [2024-07-24 19:50:06.004003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.484 [2024-07-24 19:50:06.004092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.484 [2024-07-24 19:50:06.004093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 [2024-07-24 19:50:06.717543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 Malloc0 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:15.423 19:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 Malloc1 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 [2024-07-24 19:50:06.795234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.423 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:15.423 00:14:15.423 Discovery Log Number of Records 2, Generation counter 2 00:14:15.423 =====Discovery Log Entry 0====== 00:14:15.423 trtype: tcp 00:14:15.423 adrfam: ipv4 00:14:15.423 subtype: current discovery subsystem 00:14:15.423 treq: not required 
00:14:15.423 portid: 0 00:14:15.423 trsvcid: 4420 00:14:15.424 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:15.424 traddr: 10.0.0.2 00:14:15.424 eflags: explicit discovery connections, duplicate discovery information 00:14:15.424 sectype: none 00:14:15.424 =====Discovery Log Entry 1====== 00:14:15.424 trtype: tcp 00:14:15.424 adrfam: ipv4 00:14:15.424 subtype: nvme subsystem 00:14:15.424 treq: not required 00:14:15.424 portid: 0 00:14:15.424 trsvcid: 4420 00:14:15.424 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:15.424 traddr: 10.0.0.2 00:14:15.424 eflags: none 00:14:15.424 sectype: none 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:15.424 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:16.804 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:18.714 /dev/nvme0n1 ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.714 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.714 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.714 rmmod nvme_tcp 00:14:18.714 rmmod nvme_fabrics 00:14:18.714 rmmod nvme_keyring 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2024848 ']' 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2024848 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2024848 ']' 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2024848 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2024848 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2024848' 00:14:18.974 killing process with pid 2024848 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2024848 00:14:18.974 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2024848 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.234 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.142 00:14:21.142 real 0m12.334s 00:14:21.142 user 0m19.882s 00:14:21.142 sys 0m4.532s 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.142 ************************************ 00:14:21.142 END TEST nvmf_nvme_cli 00:14:21.142 ************************************ 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.142 ************************************ 00:14:21.142 START TEST nvmf_vfio_user 00:14:21.142 ************************************ 00:14:21.142 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:21.403 * Looking for test storage... 
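After nvme connect, the nvme_cli test above did not proceed until the kernel exposed both namespaces: waitforserial (autotest_common.sh@1198-1208) polls lsblk for block devices carrying the target's serial. A sketch of that loop, with the variable names and the retry cap taken from the trace and the rest simplified:

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        sleep 2
        while (( i++ <= 15 )); do   # retry cap as in the @1206 check
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }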
00:14:21.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
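The values the sourced nvmf/common.sh echoes above fix the endpoints and identity used by everything that follows: the three TCP ports, the IP prefix, the controller serial the earlier waitforserial helpers grep for, and a host NQN freshly generated with nvme gen-hostnqn. Condensed into a standalone sketch, the defaults look roughly like this (values are copied from the trace; the grouping and the hostid derivation are assumptions):

# Test-wide defaults as reported by nvmf/common.sh above.
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)    # yields nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed derivation; the trace shows hostid == the uuid part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn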
00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:21.403 19:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2026121 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2026121' 00:14:21.403 Process pid: 2026121 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2026121 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2026121 ']' 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.403 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:21.403 [2024-07-24 19:50:12.911552] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:14:21.403 [2024-07-24 19:50:12.911606] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.403 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.403 [2024-07-24 19:50:12.967197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.663 [2024-07-24 19:50:13.049210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.663 [2024-07-24 19:50:13.049244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
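Above, setup_nvmf_vfio_user launches the target on cores 0-3 with every tracepoint group enabled (-e 0xFFFF), installs a cleanup trap, and parks in waitforlisten until the RPC socket answers; the reactor notices below confirm all four cores came up. A minimal sketch of that launch-and-wait pattern, assuming the polling probe (the binary path, flags, trap, and socket path are from the trace; killprocess is the suite's own helper):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
# Poll the UNIX-domain RPC socket until the target services requests.
# This probe is an assumed stand-in for waitforlisten's internals.
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5   # assumed interval; the helper's exact backoff is not shown
done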
00:14:21.664 [2024-07-24 19:50:13.049251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.664 [2024-07-24 19:50:13.049257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.664 [2024-07-24 19:50:13.049263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.664 [2024-07-24 19:50:13.049303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.664 [2024-07-24 19:50:13.049397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.664 [2024-07-24 19:50:13.049488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.664 [2024-07-24 19:50:13.049489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.231 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.231 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:22.231 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:23.166 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:23.424 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:23.424 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:23.424 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:23.424 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:23.424 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:23.722 Malloc1 00:14:23.722 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:24.010 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:24.010 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:24.269 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.269 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:24.269 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:24.269 Malloc2 00:14:24.528 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
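The same five-step recipe runs once per device: the trace above has just created cnode2, and its namespace and listener follow below. The whole VFIOUSER provisioning this test walks through condenses to the loop sketched here, with every RPC taken verbatim from the xtrace and only the loop wrapper added:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" nvmf_create_transport -t VFIOUSER
for i in $(seq 1 $NUM_DEVICES); do                      # NUM_DEVICES=2 here
    traddr=/var/run/vfio-user/domain/vfio-user$i/$i     # the listener's socket dir
    mkdir -p "$traddr"
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc$i     # 64 MB bdev, 512 B blocks
    "$rpc_py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    "$rpc_py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    "$rpc_py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a "$traddr" -s 0
done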
00:14:24.529 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:24.788 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:25.050 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:25.050 [2024-07-24 19:50:16.440853] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:14:25.050 [2024-07-24 19:50:16.440886] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026644 ] 00:14:25.050 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.050 [2024-07-24 19:50:16.470573] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:25.050 [2024-07-24 19:50:16.480397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:25.050 [2024-07-24 19:50:16.480417] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4e6e793000 00:14:25.050 [2024-07-24 19:50:16.481399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.482398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.483405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.484412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.485417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.486426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.487430] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.488431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:25.050 [2024-07-24 19:50:16.489436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:25.050 [2024-07-24 19:50:16.489444] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4e6e788000 00:14:25.050 [2024-07-24 19:50:16.490387] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:25.050 [2024-07-24 19:50:16.498996] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:25.050 [2024-07-24 19:50:16.499017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:25.050 [2024-07-24 19:50:16.507542] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:25.050 [2024-07-24 19:50:16.507582] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:25.050 [2024-07-24 19:50:16.507654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:25.050 [2024-07-24 19:50:16.507668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:25.050 [2024-07-24 19:50:16.507673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:25.050 [2024-07-24 19:50:16.508542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:25.050 [2024-07-24 19:50:16.508553] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:25.050 [2024-07-24 19:50:16.508562] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:25.050 [2024-07-24 19:50:16.509544] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:25.050 [2024-07-24 19:50:16.509552] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:25.050 [2024-07-24 19:50:16.509559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.510547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:25.050 [2024-07-24 19:50:16.510554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.511558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:25.050 [2024-07-24 19:50:16.511567] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:25.050 [2024-07-24 19:50:16.511571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.511576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.511682] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:25.050 [2024-07-24 19:50:16.511686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.511690] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:25.050 [2024-07-24 19:50:16.512562] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:25.050 [2024-07-24 19:50:16.513565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:25.050 [2024-07-24 19:50:16.514572] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:25.050 [2024-07-24 19:50:16.515574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.050 [2024-07-24 19:50:16.515638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:25.050 [2024-07-24 19:50:16.516585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:25.050 [2024-07-24 19:50:16.516592] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:25.050 [2024-07-24 19:50:16.516596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:25.050 [2024-07-24 19:50:16.516613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:25.050 [2024-07-24 19:50:16.516623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:25.050 [2024-07-24 19:50:16.516637] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.050 [2024-07-24 19:50:16.516641] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.050 [2024-07-24 19:50:16.516646] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.050 [2024-07-24 19:50:16.516659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.050 [2024-07-24 19:50:16.516697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:25.050 [2024-07-24 19:50:16.516705] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:25.050 [2024-07-24 19:50:16.516709] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:25.050 [2024-07-24 19:50:16.516713] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:25.050 [2024-07-24 19:50:16.516716] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:25.050 [2024-07-24 19:50:16.516721] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:25.050 [2024-07-24 19:50:16.516724] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:25.050 [2024-07-24 19:50:16.516728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:25.050 [2024-07-24 19:50:16.516735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:25.050 [2024-07-24 19:50:16.516747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.516761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.516772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.051 [2024-07-24 19:50:16.516780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.051 [2024-07-24 19:50:16.516787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.051 [2024-07-24 19:50:16.516794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.051 [2024-07-24 19:50:16.516798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.516820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.516824] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:25.051 
[2024-07-24 19:50:16.516829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.516863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.516914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516928] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:25.051 [2024-07-24 19:50:16.516931] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:25.051 [2024-07-24 19:50:16.516934] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.516940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.516949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.516958] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:25.051 [2024-07-24 19:50:16.516965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.516978] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.051 [2024-07-24 19:50:16.516981] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.051 [2024-07-24 19:50:16.516984] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.516990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517030] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:25.051 [2024-07-24 19:50:16.517034] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.051 [2024-07-24 19:50:16.517037] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.517045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517095] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:25.051 [2024-07-24 19:50:16.517099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:25.051 [2024-07-24 19:50:16.517103] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:25.051 [2024-07-24 19:50:16.517119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 
19:50:16.517167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517198] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:25.051 [2024-07-24 19:50:16.517202] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:25.051 [2024-07-24 19:50:16.517205] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:25.051 [2024-07-24 19:50:16.517208] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:25.051 [2024-07-24 19:50:16.517211] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:25.051 [2024-07-24 19:50:16.517216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:25.051 [2024-07-24 19:50:16.517222] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:25.051 [2024-07-24 19:50:16.517226] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:25.051 [2024-07-24 19:50:16.517229] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.517234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517240] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:25.051 [2024-07-24 19:50:16.517244] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:25.051 [2024-07-24 19:50:16.517247] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.517253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517260] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:25.051 [2024-07-24 19:50:16.517263] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:25.051 [2024-07-24 19:50:16.517266] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:25.051 [2024-07-24 19:50:16.517271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:25.051 [2024-07-24 19:50:16.517277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 
19:50:16.517299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:25.051 [2024-07-24 19:50:16.517306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:25.051 ===================================================== 00:14:25.051 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.052 ===================================================== 00:14:25.052 Controller Capabilities/Features 00:14:25.052 ================================ 00:14:25.052 Vendor ID: 4e58 00:14:25.052 Subsystem Vendor ID: 4e58 00:14:25.052 Serial Number: SPDK1 00:14:25.052 Model Number: SPDK bdev Controller 00:14:25.052 Firmware Version: 24.09 00:14:25.052 Recommended Arb Burst: 6 00:14:25.052 IEEE OUI Identifier: 8d 6b 50 00:14:25.052 Multi-path I/O 00:14:25.052 May have multiple subsystem ports: Yes 00:14:25.052 May have multiple controllers: Yes 00:14:25.052 Associated with SR-IOV VF: No 00:14:25.052 Max Data Transfer Size: 131072 00:14:25.052 Max Number of Namespaces: 32 00:14:25.052 Max Number of I/O Queues: 127 00:14:25.052 NVMe Specification Version (VS): 1.3 00:14:25.052 NVMe Specification Version (Identify): 1.3 00:14:25.052 Maximum Queue Entries: 256 00:14:25.052 Contiguous Queues Required: Yes 00:14:25.052 Arbitration Mechanisms Supported 00:14:25.052 Weighted Round Robin: Not Supported 00:14:25.052 Vendor Specific: Not Supported 00:14:25.052 Reset Timeout: 15000 ms 00:14:25.052 Doorbell Stride: 4 bytes 00:14:25.052 NVM Subsystem Reset: Not Supported 00:14:25.052 Command Sets Supported 00:14:25.052 NVM Command Set: Supported 00:14:25.052 Boot Partition: Not Supported 00:14:25.052 Memory Page Size Minimum: 4096 bytes 00:14:25.052 Memory Page Size Maximum: 4096 bytes 00:14:25.052 Persistent Memory Region: Not Supported 00:14:25.052 Optional Asynchronous Events Supported 00:14:25.052 Namespace Attribute Notices: Supported 00:14:25.052 Firmware Activation Notices: Not Supported 00:14:25.052 ANA Change Notices: Not Supported 00:14:25.052 PLE Aggregate Log Change Notices: Not Supported 00:14:25.052 LBA Status Info Alert Notices: Not Supported 00:14:25.052 EGE Aggregate Log Change Notices: Not Supported 00:14:25.052 Normal NVM Subsystem Shutdown event: Not Supported 00:14:25.052 Zone Descriptor Change Notices: Not Supported 00:14:25.052 Discovery Log Change Notices: Not Supported 00:14:25.052 Controller Attributes 00:14:25.052 128-bit Host Identifier: Supported 00:14:25.052 Non-Operational Permissive Mode: Not Supported 00:14:25.052 NVM Sets: Not Supported 00:14:25.052 Read Recovery Levels: Not Supported 00:14:25.052 Endurance Groups: Not Supported 00:14:25.052 Predictable Latency Mode: Not Supported 00:14:25.052 Traffic Based Keep ALive: Not Supported 00:14:25.052 Namespace Granularity: Not Supported 00:14:25.052 SQ Associations: Not Supported 00:14:25.052 UUID List: Not Supported 00:14:25.052 Multi-Domain Subsystem: Not Supported 00:14:25.052 Fixed Capacity Management: Not Supported 00:14:25.052 Variable Capacity Management: Not Supported 00:14:25.052 Delete Endurance Group: Not Supported 00:14:25.052 Delete NVM Set: Not Supported 00:14:25.052 Extended LBA Formats Supported: Not Supported 00:14:25.052 Flexible Data Placement Supported: Not Supported 00:14:25.052 00:14:25.052 Controller Memory Buffer Support 00:14:25.052 ================================ 00:14:25.052 Supported: No 00:14:25.052 00:14:25.052 Persistent 
Memory Region Support 00:14:25.052 ================================ 00:14:25.052 Supported: No 00:14:25.052 00:14:25.052 Admin Command Set Attributes 00:14:25.052 ============================ 00:14:25.052 Security Send/Receive: Not Supported 00:14:25.052 Format NVM: Not Supported 00:14:25.052 Firmware Activate/Download: Not Supported 00:14:25.052 Namespace Management: Not Supported 00:14:25.052 Device Self-Test: Not Supported 00:14:25.052 Directives: Not Supported 00:14:25.052 NVMe-MI: Not Supported 00:14:25.052 Virtualization Management: Not Supported 00:14:25.052 Doorbell Buffer Config: Not Supported 00:14:25.052 Get LBA Status Capability: Not Supported 00:14:25.052 Command & Feature Lockdown Capability: Not Supported 00:14:25.052 Abort Command Limit: 4 00:14:25.052 Async Event Request Limit: 4 00:14:25.052 Number of Firmware Slots: N/A 00:14:25.052 Firmware Slot 1 Read-Only: N/A 00:14:25.052 Firmware Activation Without Reset: N/A 00:14:25.052 Multiple Update Detection Support: N/A 00:14:25.052 Firmware Update Granularity: No Information Provided 00:14:25.052 Per-Namespace SMART Log: No 00:14:25.052 Asymmetric Namespace Access Log Page: Not Supported 00:14:25.052 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:25.052 Command Effects Log Page: Supported 00:14:25.052 Get Log Page Extended Data: Supported 00:14:25.052 Telemetry Log Pages: Not Supported 00:14:25.052 Persistent Event Log Pages: Not Supported 00:14:25.052 Supported Log Pages Log Page: May Support 00:14:25.052 Commands Supported & Effects Log Page: Not Supported 00:14:25.052 Feature Identifiers & Effects Log Page:May Support 00:14:25.052 NVMe-MI Commands & Effects Log Page: May Support 00:14:25.052 Data Area 4 for Telemetry Log: Not Supported 00:14:25.052 Error Log Page Entries Supported: 128 00:14:25.052 Keep Alive: Supported 00:14:25.052 Keep Alive Granularity: 10000 ms 00:14:25.052 00:14:25.052 NVM Command Set Attributes 00:14:25.052 ========================== 00:14:25.052 Submission Queue Entry Size 00:14:25.052 Max: 64 00:14:25.052 Min: 64 00:14:25.052 Completion Queue Entry Size 00:14:25.052 Max: 16 00:14:25.052 Min: 16 00:14:25.052 Number of Namespaces: 32 00:14:25.052 Compare Command: Supported 00:14:25.052 Write Uncorrectable Command: Not Supported 00:14:25.052 Dataset Management Command: Supported 00:14:25.052 Write Zeroes Command: Supported 00:14:25.052 Set Features Save Field: Not Supported 00:14:25.052 Reservations: Not Supported 00:14:25.052 Timestamp: Not Supported 00:14:25.052 Copy: Supported 00:14:25.052 Volatile Write Cache: Present 00:14:25.052 Atomic Write Unit (Normal): 1 00:14:25.052 Atomic Write Unit (PFail): 1 00:14:25.052 Atomic Compare & Write Unit: 1 00:14:25.052 Fused Compare & Write: Supported 00:14:25.052 Scatter-Gather List 00:14:25.052 SGL Command Set: Supported (Dword aligned) 00:14:25.052 SGL Keyed: Not Supported 00:14:25.052 SGL Bit Bucket Descriptor: Not Supported 00:14:25.052 SGL Metadata Pointer: Not Supported 00:14:25.052 Oversized SGL: Not Supported 00:14:25.052 SGL Metadata Address: Not Supported 00:14:25.052 SGL Offset: Not Supported 00:14:25.052 Transport SGL Data Block: Not Supported 00:14:25.052 Replay Protected Memory Block: Not Supported 00:14:25.052 00:14:25.052 Firmware Slot Information 00:14:25.052 ========================= 00:14:25.052 Active slot: 1 00:14:25.052 Slot 1 Firmware Revision: 24.09 00:14:25.052 00:14:25.052 00:14:25.052 Commands Supported and Effects 00:14:25.052 ============================== 00:14:25.052 Admin Commands 00:14:25.052 -------------- 00:14:25.052 Get 
Log Page (02h): Supported 00:14:25.052 Identify (06h): Supported 00:14:25.052 Abort (08h): Supported 00:14:25.052 Set Features (09h): Supported 00:14:25.052 Get Features (0Ah): Supported 00:14:25.052 Asynchronous Event Request (0Ch): Supported 00:14:25.052 Keep Alive (18h): Supported 00:14:25.052 I/O Commands 00:14:25.052 ------------ 00:14:25.052 Flush (00h): Supported LBA-Change 00:14:25.052 Write (01h): Supported LBA-Change 00:14:25.052 Read (02h): Supported 00:14:25.052 Compare (05h): Supported 00:14:25.052 Write Zeroes (08h): Supported LBA-Change 00:14:25.052 Dataset Management (09h): Supported LBA-Change 00:14:25.052 Copy (19h): Supported LBA-Change 00:14:25.052 00:14:25.052 Error Log 00:14:25.052 ========= 00:14:25.052 00:14:25.052 Arbitration 00:14:25.052 =========== 00:14:25.052 Arbitration Burst: 1 00:14:25.052 00:14:25.052 Power Management 00:14:25.052 ================ 00:14:25.052 Number of Power States: 1 00:14:25.052 Current Power State: Power State #0 00:14:25.052 Power State #0: 00:14:25.052 Max Power: 0.00 W 00:14:25.052 Non-Operational State: Operational 00:14:25.052 Entry Latency: Not Reported 00:14:25.052 Exit Latency: Not Reported 00:14:25.052 Relative Read Throughput: 0 00:14:25.052 Relative Read Latency: 0 00:14:25.052 Relative Write Throughput: 0 00:14:25.052 Relative Write Latency: 0 00:14:25.052 Idle Power: Not Reported 00:14:25.052 Active Power: Not Reported 00:14:25.052 Non-Operational Permissive Mode: Not Supported 00:14:25.052 00:14:25.053 Health Information 00:14:25.053 ================== 00:14:25.053 Critical Warnings: 00:14:25.053 Available Spare Space: OK 00:14:25.053 Temperature: OK 00:14:25.053 Device Reliability: OK 00:14:25.053 Read Only: No 00:14:25.053 Volatile Memory Backup: OK 00:14:25.053 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:25.053 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:25.053 Available Spare: 0% 00:14:25.053 Available Spare Threshold: 0% 00:14:25.053 Life Percentage Used: 0% 00:14:25.053 Data Units Read: 0 00:14:25.053 Data Units Written: 0 00:14:25.053 Host Read Commands: 0 00:14:25.053 Host Write Commands: 0 00:14:25.053 Controller Busy Time: 0 minutes 00:14:25.053 Power Cycles: 0 00:14:25.053 Power On Hours: 0 hours 00:14:25.053 Unsafe Shutdowns: 0 00:14:25.053 Unrecoverable Media Errors: 0 00:14:25.053 Lifetime Error Log Entries: 0 00:14:25.053 Warning Temperature Time: 0 minutes 00:14:25.053 Critical Temperature Time: 0 minutes 00:14:25.053 00:14:25.053 Number of Queues 00:14:25.053 ================ 00:14:25.053 Number of I/O Submission Queues: 127 00:14:25.053 Number of I/O Completion Queues: 127 00:14:25.053 00:14:25.053 Active Namespaces 00:14:25.053 ================= 00:14:25.053 Namespace ID:1 00:14:25.053 Error Recovery Timeout: Unlimited 00:14:25.053 Command Set Identifier: NVM (00h) 00:14:25.053 Deallocate: Supported 00:14:25.053 Deallocated/Unwritten Error: Not Supported 00:14:25.053 Deallocated Read Value: Unknown 00:14:25.053 Deallocate in Write Zeroes: Not Supported 00:14:25.053 Deallocated Guard Field: 0xFFFF 00:14:25.053 Flush: Supported 00:14:25.053 Reservation: Supported 00:14:25.053 Namespace Sharing Capabilities: Multiple Controllers 00:14:25.053 Size (in LBAs): 131072 (0GiB) 00:14:25.053 Capacity (in LBAs): 131072 (0GiB) 00:14:25.053 Utilization (in LBAs): 131072 (0GiB) 00:14:25.053 NGUID: 462FBF8AFB8946C988A59D7E956A9134 00:14:25.053 UUID: 462fbf8a-fb89-46c9-88a5-9d7e956a9134 00:14:25.053 Thin Provisioning: Not Supported 00:14:25.053 Per-NS Atomic Units: Yes 00:14:25.053 Atomic Boundary Size (Normal): 0 00:14:25.053 Atomic Boundary Size (PFail): 0 00:14:25.053 Atomic Boundary Offset: 0 00:14:25.053 Maximum Single Source Range Length: 65535 00:14:25.053 Maximum Copy Length: 65535 00:14:25.053 Maximum Source Range Count: 1 00:14:25.053 NGUID/EUI64 Never Reused: No 00:14:25.053 Namespace Write Protected: No 00:14:25.053 Number of LBA Formats: 1 00:14:25.053 Current LBA Format: LBA Format #00 00:14:25.053 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:25.053
[2024-07-24 19:50:16.517391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:25.053 [2024-07-24 19:50:16.517398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:25.053 [2024-07-24 19:50:16.517420] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:25.053 [2024-07-24 19:50:16.517428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.053 [2024-07-24 19:50:16.517434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.053 [2024-07-24 19:50:16.517439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.053 [2024-07-24 19:50:16.517444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.053 [2024-07-24 19:50:16.517590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:25.053 [2024-07-24 19:50:16.517599] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:25.053 [2024-07-24 19:50:16.518589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-07-24 19:50:16.518636] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us [2024-07-24 19:50:16.518642] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms [2024-07-24 19:50:16.519602] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-07-24 19:50:16.519612] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds [2024-07-24 19:50:16.519660] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-07-24 19:50:16.521633] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:25.053 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:25.053 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:25.313 [2024-07-24 19:50:16.733805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.632 Initializing NVMe Controllers 00:14:30.632 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:30.632 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:30.632 Initialization complete. Launching workers. 00:14:30.632 ======================================================== 00:14:30.632 Latency(us) 00:14:30.632 Device Information : IOPS MiB/s Average min max 00:14:30.632 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39899.04 155.86 3207.91 954.88 9114.30 00:14:30.632 ======================================================== 00:14:30.632 Total : 39899.04 155.86 3207.91 954.88 9114.30 00:14:30.632 00:14:30.632 [2024-07-24 19:50:21.755100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.632 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:30.632 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.632 [2024-07-24 19:50:21.977153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.908 Initializing NVMe Controllers 00:14:35.908 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:35.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:35.908 Initialization complete. Launching workers. 
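The 4 KiB read results above are internally consistent: with -q 128 commands outstanding, Little's law (in-flight = IOPS x mean latency) recovers the queue depth almost exactly from the measured pair. A quick check:

# 39899.04 IO/s x 3207.91 us mean latency ~= 128 IOs in flight, i.e. the -q 128 above
awk 'BEGIN { printf "%.1f IOs in flight\n", 39899.04 * 3207.91e-6 }'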
00:14:35.908 ======================================================== 00:14:35.908 Latency(us) 00:14:35.908 Device Information : IOPS MiB/s Average min max 00:14:35.908 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16033.22 62.63 7987.18 5788.80 15491.96 00:14:35.908 ======================================================== 00:14:35.908 Total : 16033.22 62.63 7987.18 5788.80 15491.96 00:14:35.908 00:14:35.908 [2024-07-24 19:50:27.021341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.908 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:35.908 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.908 [2024-07-24 19:50:27.221354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.186 [2024-07-24 19:50:32.317441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.186 Initializing NVMe Controllers 00:14:41.186 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.186 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:41.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:41.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:41.186 Initialization complete. Launching workers. 00:14:41.186 Starting thread on core 2 00:14:41.186 Starting thread on core 3 00:14:41.186 Starting thread on core 1 00:14:41.186 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:41.186 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.186 [2024-07-24 19:50:32.599436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.473 [2024-07-24 19:50:35.651729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.473 Initializing NVMe Controllers 00:14:44.473 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.473 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:44.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:44.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:44.473 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:44.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:44.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:44.473 Initialization complete. Launching workers. 
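The write pass above tells the same story at lower throughput: 16033.22 IO/s of 4 KiB writes reproduces the reported MiB/s column, and IOPS times mean latency again lands on the 128-deep queue:

awk 'BEGIN {
    printf "%.2f MiB/s\n", 16033.22 * 4096 / 1048576       # the 62.63 reported above
    printf "%.1f IOs in flight\n", 16033.22 * 7987.18e-6   # ~128, the -q setting
}'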
00:14:44.473 Starting thread on core 1 with urgent priority queue 00:14:44.473 Starting thread on core 2 with urgent priority queue 00:14:44.473 Starting thread on core 3 with urgent priority queue 00:14:44.473 Starting thread on core 0 with urgent priority queue 00:14:44.473 SPDK bdev Controller (SPDK1 ) core 0: 5166.33 IO/s 19.36 secs/100000 ios 00:14:44.473 SPDK bdev Controller (SPDK1 ) core 1: 6189.33 IO/s 16.16 secs/100000 ios 00:14:44.473 SPDK bdev Controller (SPDK1 ) core 2: 4336.00 IO/s 23.06 secs/100000 ios 00:14:44.473 SPDK bdev Controller (SPDK1 ) core 3: 5054.00 IO/s 19.79 secs/100000 ios 00:14:44.473 ======================================================== 00:14:44.473 00:14:44.473 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:44.473 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.473 [2024-07-24 19:50:35.916526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.473 Initializing NVMe Controllers 00:14:44.473 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.473 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.473 Namespace ID: 1 size: 0GB 00:14:44.473 Initialization complete. 00:14:44.473 INFO: using host memory buffer for IO 00:14:44.473 Hello world! 00:14:44.473 [2024-07-24 19:50:35.950731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.473 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:44.473 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.731 [2024-07-24 19:50:36.217490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.667 Initializing NVMe Controllers 00:14:45.667 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.667 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:45.667 Initialization complete. Launching workers. 
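The hello_world and overhead passes above round out the per-controller sequence; their invocations, condensed from this log (same placeholder variables):

# hello_world: writes a "Hello world!" payload to namespace 1 and reads
# it back through the vfio-user controller.
"${SPDK_DIR}/build/examples/hello_world" \
  -d 256 -g -r "trtype:VFIOUSER traddr:${SOCK} subnqn:${NQN}"
# overhead: 4 KiB I/Os for 1 second; -H requests the per-call submit and
# complete histograms that follow below.
"${SPDK_DIR}/test/nvme/overhead/overhead" \
  -o 4096 -t 1 -H -g -d 256 \
  -r "trtype:VFIOUSER traddr:${SOCK} subnqn:${NQN}"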
00:14:45.667 submit (in ns) avg, min, max = 7188.0, 3261.7, 4190829.6 00:14:45.667 complete (in ns) avg, min, max = 19393.5, 1801.7, 4012383.5 00:14:45.667 00:14:45.667 Submit histogram 00:14:45.667 ================ 00:14:45.667 Range in us Cumulative Count 00:14:45.668 3.256 - 3.270: 0.0062% ( 1) 00:14:45.668 3.270 - 3.283: 0.0123% ( 1) 00:14:45.668 3.297 - 3.311: 0.0185% ( 1) 00:14:45.668 3.311 - 3.325: 0.1418% ( 20) 00:14:45.668 3.325 - 3.339: 1.1533% ( 164) 00:14:45.668 3.339 - 3.353: 4.5452% ( 550) 00:14:45.668 3.353 - 3.367: 9.8982% ( 868) 00:14:45.668 3.367 - 3.381: 16.0715% ( 1001) 00:14:45.668 3.381 - 3.395: 22.4854% ( 1040) 00:14:45.668 3.395 - 3.409: 28.7388% ( 1014) 00:14:45.668 3.409 - 3.423: 34.1042% ( 870) 00:14:45.668 3.423 - 3.437: 39.2414% ( 833) 00:14:45.668 3.437 - 3.450: 44.7549% ( 894) 00:14:45.668 3.450 - 3.464: 49.2445% ( 728) 00:14:45.668 3.464 - 3.478: 53.0558% ( 618) 00:14:45.668 3.478 - 3.492: 58.6247% ( 903) 00:14:45.668 3.492 - 3.506: 65.5011% ( 1115) 00:14:45.668 3.506 - 3.520: 69.9846% ( 727) 00:14:45.668 3.520 - 3.534: 74.5606% ( 742) 00:14:45.668 3.534 - 3.548: 79.5621% ( 811) 00:14:45.668 3.548 - 3.562: 83.0959% ( 573) 00:14:45.668 3.562 - 3.590: 86.3336% ( 525) 00:14:45.668 3.590 - 3.617: 87.3882% ( 171) 00:14:45.668 3.617 - 3.645: 88.3996% ( 164) 00:14:45.668 3.645 - 3.673: 90.1018% ( 276) 00:14:45.668 3.673 - 3.701: 91.8224% ( 279) 00:14:45.668 3.701 - 3.729: 93.5553% ( 281) 00:14:45.668 3.729 - 3.757: 95.3253% ( 287) 00:14:45.668 3.757 - 3.784: 96.9349% ( 261) 00:14:45.668 3.784 - 3.812: 98.1067% ( 190) 00:14:45.668 3.812 - 3.840: 98.7666% ( 107) 00:14:45.668 3.840 - 3.868: 99.1983% ( 70) 00:14:45.668 3.868 - 3.896: 99.4388% ( 39) 00:14:45.668 3.896 - 3.923: 99.5868% ( 24) 00:14:45.668 3.923 - 3.951: 99.6300% ( 7) 00:14:45.668 4.007 - 4.035: 99.6361% ( 1) 00:14:45.668 5.537 - 5.565: 99.6423% ( 1) 00:14:45.668 5.565 - 5.593: 99.6546% ( 2) 00:14:45.668 5.788 - 5.816: 99.6608% ( 1) 00:14:45.668 5.955 - 5.983: 99.6670% ( 1) 00:14:45.668 6.010 - 6.038: 99.6731% ( 1) 00:14:45.668 6.317 - 6.344: 99.6793% ( 1) 00:14:45.668 6.344 - 6.372: 99.6855% ( 1) 00:14:45.668 6.400 - 6.428: 99.6916% ( 1) 00:14:45.668 6.456 - 6.483: 99.6978% ( 1) 00:14:45.668 6.623 - 6.650: 99.7040% ( 1) 00:14:45.668 6.929 - 6.957: 99.7101% ( 1) 00:14:45.668 7.040 - 7.068: 99.7163% ( 1) 00:14:45.668 7.235 - 7.290: 99.7225% ( 1) 00:14:45.668 7.290 - 7.346: 99.7348% ( 2) 00:14:45.668 7.346 - 7.402: 99.7410% ( 1) 00:14:45.668 7.457 - 7.513: 99.7533% ( 2) 00:14:45.668 7.624 - 7.680: 99.7595% ( 1) 00:14:45.668 7.680 - 7.736: 99.7656% ( 1) 00:14:45.668 7.736 - 7.791: 99.7718% ( 1) 00:14:45.668 7.791 - 7.847: 99.7903% ( 3) 00:14:45.668 7.903 - 7.958: 99.8027% ( 2) 00:14:45.668 8.070 - 8.125: 99.8088% ( 1) 00:14:45.668 8.125 - 8.181: 99.8150% ( 1) 00:14:45.668 8.181 - 8.237: 99.8212% ( 1) 00:14:45.668 8.292 - 8.348: 99.8273% ( 1) 00:14:45.668 8.348 - 8.403: 99.8397% ( 2) 00:14:45.668 8.403 - 8.459: 99.8520% ( 2) 00:14:45.668 8.570 - 8.626: 99.8582% ( 1) 00:14:45.668 8.793 - 8.849: 99.8643% ( 1) 00:14:45.668 8.960 - 9.016: 99.8705% ( 1) 00:14:45.668 9.127 - 9.183: 99.8828% ( 2) 00:14:45.668 9.517 - 9.572: 99.8890% ( 1) 00:14:45.668 9.572 - 9.628: 99.8952% ( 1) 00:14:45.668 10.073 - 10.129: 99.9013% ( 1) 00:14:45.668 10.296 - 10.351: 99.9075% ( 1) 00:14:45.668 3675.715 - 3704.209: 99.9137% ( 1) 00:14:45.668 3989.148 - 4017.642: 99.9938% ( 13) 00:14:45.668 4188.605 - 4217.099: 100.0000% ( 1) 00:14:45.668 00:14:45.668 Complete histogram 00:14:45.668 ================== 00:14:45.668 Range in us 
Cumulative Count 00:14:45.668 1.795 - 1.809: 0.0308% ( 5) 00:14:45.668 1.809 - 1.823: 0.0617% ( 5) 00:14:45.668 1.837 - 1.850: 1.1348% ( 174) 00:14:45.668 [2024-07-24 19:50:37.238545] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.927 1.850 - 1.864: 22.1832% ( 3413) 00:14:45.927 1.864 - 1.878: 46.0130% ( 3864) 00:14:45.927 1.878 - 1.892: 50.9652% ( 803) 00:14:45.927 1.892 - 1.906: 64.7919% ( 2242) 00:14:45.927 1.906 - 1.920: 87.9309% ( 3752) 00:14:45.927 1.920 - 1.934: 93.6294% ( 924) 00:14:45.927 1.934 - 1.948: 96.6512% ( 490) 00:14:45.927 1.948 - 1.962: 97.4653% ( 132) 00:14:45.927 1.962 - 1.976: 98.0882% ( 101) 00:14:45.927 1.976 - 1.990: 98.7789% ( 112) 00:14:45.927 1.990 - 2.003: 99.0626% ( 46) 00:14:45.927 2.003 - 2.017: 99.2353% ( 28) 00:14:45.927 2.017 - 2.031: 99.2723% ( 6) 00:14:45.927 2.031 - 2.045: 99.3278% ( 9) 00:14:45.927 2.045 - 2.059: 99.3340% ( 1) 00:14:45.927 2.059 - 2.073: 99.3463% ( 2) 00:14:45.927 2.115 - 2.129: 99.3710% ( 4) 00:14:45.927 2.226 - 2.240: 99.3771% ( 1) 00:14:45.927 3.896 - 3.923: 99.3833% ( 1) 00:14:45.927 3.923 - 3.951: 99.3895% ( 1) 00:14:45.927 4.397 - 4.424: 99.3956% ( 1) 00:14:45.927 4.536 - 4.563: 99.4018% ( 1) 00:14:45.927 4.563 - 4.591: 99.4080% ( 1) 00:14:45.927 4.619 - 4.647: 99.4141% ( 1) 00:14:45.927 4.730 - 4.758: 99.4203% ( 1) 00:14:45.927 4.814 - 4.842: 99.4265% ( 1) 00:14:45.927 4.842 - 4.870: 99.4388% ( 2) 00:14:45.927 4.870 - 4.897: 99.4450% ( 1) 00:14:45.927 4.925 - 4.953: 99.4511% ( 1) 00:14:45.927 5.176 - 5.203: 99.4573% ( 1) 00:14:45.927 5.343 - 5.370: 99.4696% ( 2) 00:14:45.927 5.510 - 5.537: 99.4758% ( 1) 00:14:45.927 5.621 - 5.649: 99.4881% ( 2) 00:14:45.927 5.955 - 5.983: 99.4943% ( 1) 00:14:45.927 6.289 - 6.317: 99.5005% ( 1) 00:14:45.927 6.678 - 6.706: 99.5066% ( 1) 00:14:45.927 6.873 - 6.901: 99.5128% ( 1) 00:14:45.927 7.402 - 7.457: 99.5190% ( 1) 00:14:45.927 7.457 - 7.513: 99.5251% ( 1) 00:14:45.927 7.569 - 7.624: 99.5313% ( 1) 00:14:45.927 8.459 - 8.515: 99.5375% ( 1) 00:14:45.927 9.238 - 9.294: 99.5436% ( 1) 00:14:45.927 11.297 - 11.353: 99.5498% ( 1) 00:14:45.927 13.078 - 13.134: 99.5560% ( 1) 00:14:45.927 13.690 - 13.746: 99.5621% ( 1) 00:14:45.927 3989.148 - 4017.642: 100.0000% ( 71) 00:14:45.927 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:45.927 [ 00:14:45.927 { 00:14:45.927 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.927 "subtype": "Discovery", 00:14:45.927 "listen_addresses": [], 00:14:45.927 "allow_any_host": true, 00:14:45.927 "hosts": [] 00:14:45.927 }, 00:14:45.927 { 00:14:45.927 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:45.927 "subtype": "NVMe", 00:14:45.927 "listen_addresses": [ 00:14:45.927 { 00:14:45.927 "trtype": "VFIOUSER", 00:14:45.927 "adrfam": "IPv4", 00:14:45.927 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:14:45.927 "trsvcid": "0" 00:14:45.927 } 00:14:45.927 ], 00:14:45.927 "allow_any_host": true, 00:14:45.927 "hosts": [], 00:14:45.927 "serial_number": "SPDK1", 00:14:45.927 "model_number": "SPDK bdev Controller", 00:14:45.927 "max_namespaces": 32, 00:14:45.927 "min_cntlid": 1, 00:14:45.927 "max_cntlid": 65519, 00:14:45.927 "namespaces": [ 00:14:45.927 { 00:14:45.927 "nsid": 1, 00:14:45.927 "bdev_name": "Malloc1", 00:14:45.927 "name": "Malloc1", 00:14:45.927 "nguid": "462FBF8AFB8946C988A59D7E956A9134", 00:14:45.927 "uuid": "462fbf8a-fb89-46c9-88a5-9d7e956a9134" 00:14:45.927 } 00:14:45.927 ] 00:14:45.927 }, 00:14:45.927 { 00:14:45.927 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:45.927 "subtype": "NVMe", 00:14:45.927 "listen_addresses": [ 00:14:45.927 { 00:14:45.927 "trtype": "VFIOUSER", 00:14:45.927 "adrfam": "IPv4", 00:14:45.927 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:45.927 "trsvcid": "0" 00:14:45.927 } 00:14:45.927 ], 00:14:45.927 "allow_any_host": true, 00:14:45.927 "hosts": [], 00:14:45.927 "serial_number": "SPDK2", 00:14:45.927 "model_number": "SPDK bdev Controller", 00:14:45.927 "max_namespaces": 32, 00:14:45.927 "min_cntlid": 1, 00:14:45.927 "max_cntlid": 65519, 00:14:45.927 "namespaces": [ 00:14:45.927 { 00:14:45.927 "nsid": 1, 00:14:45.927 "bdev_name": "Malloc2", 00:14:45.927 "name": "Malloc2", 00:14:45.927 "nguid": "B518B0CD7E834CC491BECF8B6244B40A", 00:14:45.927 "uuid": "b518b0cd-7e83-4cc4-91be-cf8b6244b40a" 00:14:45.927 } 00:14:45.927 ] 00:14:45.927 } 00:14:45.927 ] 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2030092 00:14:45.927 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:45.928 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:45.928 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.186 [2024-07-24 19:50:37.593787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.186 Malloc3 00:14:46.186 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:46.445 [2024-07-24 19:50:37.822556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.445 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:46.445 Asynchronous Event Request test 00:14:46.445 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.445 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.445 Registering asynchronous event callbacks... 00:14:46.445 Starting namespace attribute notice tests for all controllers... 00:14:46.445 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:46.445 aer_cb - Changed Namespace 00:14:46.445 Cleaning up... 00:14:46.445 [ 00:14:46.445 { 00:14:46.445 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.445 "subtype": "Discovery", 00:14:46.445 "listen_addresses": [], 00:14:46.445 "allow_any_host": true, 00:14:46.445 "hosts": [] 00:14:46.445 }, 00:14:46.445 { 00:14:46.445 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.445 "subtype": "NVMe", 00:14:46.445 "listen_addresses": [ 00:14:46.445 { 00:14:46.445 "trtype": "VFIOUSER", 00:14:46.445 "adrfam": "IPv4", 00:14:46.445 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.445 "trsvcid": "0" 00:14:46.445 } 00:14:46.445 ], 00:14:46.445 "allow_any_host": true, 00:14:46.445 "hosts": [], 00:14:46.445 "serial_number": "SPDK1", 00:14:46.445 "model_number": "SPDK bdev Controller", 00:14:46.445 "max_namespaces": 32, 00:14:46.445 "min_cntlid": 1, 00:14:46.445 "max_cntlid": 65519, 00:14:46.445 "namespaces": [ 00:14:46.445 { 00:14:46.445 "nsid": 1, 00:14:46.445 "bdev_name": "Malloc1", 00:14:46.445 "name": "Malloc1", 00:14:46.445 "nguid": "462FBF8AFB8946C988A59D7E956A9134", 00:14:46.445 "uuid": "462fbf8a-fb89-46c9-88a5-9d7e956a9134" 00:14:46.445 }, 00:14:46.445 { 00:14:46.445 "nsid": 2, 00:14:46.445 "bdev_name": "Malloc3", 00:14:46.445 "name": "Malloc3", 00:14:46.445 "nguid": "B6325A0E85ED4561A96B41E002437CF7", 00:14:46.445 "uuid": "b6325a0e-85ed-4561-a96b-41e002437cf7" 00:14:46.445 } 00:14:46.445 ] 00:14:46.445 }, 00:14:46.445 { 00:14:46.445 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.445 "subtype": "NVMe", 00:14:46.445 "listen_addresses": [ 00:14:46.445 { 00:14:46.445 "trtype": "VFIOUSER", 00:14:46.445 "adrfam": "IPv4", 00:14:46.445 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.445 "trsvcid": "0" 00:14:46.445 } 00:14:46.445 ], 00:14:46.445 "allow_any_host": true, 00:14:46.445 "hosts": [], 00:14:46.445 
"serial_number": "SPDK2", 00:14:46.445 "model_number": "SPDK bdev Controller", 00:14:46.445 "max_namespaces": 32, 00:14:46.445 "min_cntlid": 1, 00:14:46.445 "max_cntlid": 65519, 00:14:46.445 "namespaces": [ 00:14:46.445 { 00:14:46.445 "nsid": 1, 00:14:46.445 "bdev_name": "Malloc2", 00:14:46.445 "name": "Malloc2", 00:14:46.445 "nguid": "B518B0CD7E834CC491BECF8B6244B40A", 00:14:46.445 "uuid": "b518b0cd-7e83-4cc4-91be-cf8b6244b40a" 00:14:46.445 } 00:14:46.445 ] 00:14:46.445 } 00:14:46.445 ] 00:14:46.445 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2030092 00:14:46.445 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.445 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:46.445 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:46.445 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.705 [2024-07-24 19:50:38.056675] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:14:46.706 [2024-07-24 19:50:38.056703] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030236 ] 00:14:46.706 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.706 [2024-07-24 19:50:38.083466] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:46.706 [2024-07-24 19:50:38.094961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.706 [2024-07-24 19:50:38.094981] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb7d64d6000 00:14:46.706 [2024-07-24 19:50:38.095963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.096964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.097975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.098981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.099989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.100998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.102002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.103013] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.706 [2024-07-24 19:50:38.104021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.706 [2024-07-24 19:50:38.104031] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb7d64cb000 00:14:46.706 [2024-07-24 19:50:38.104971] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.706 [2024-07-24 19:50:38.116515] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:46.706 [2024-07-24 19:50:38.116535] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:46.706 [2024-07-24 19:50:38.121614] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:46.706 [2024-07-24 19:50:38.121652] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.706 [2024-07-24 19:50:38.121720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:46.706 [2024-07-24 19:50:38.121734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:46.706 [2024-07-24 19:50:38.121739] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:46.706 [2024-07-24 19:50:38.122613] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:46.706 [2024-07-24 19:50:38.122625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:46.706 [2024-07-24 19:50:38.122632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:46.706 [2024-07-24 19:50:38.123623] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:46.706 [2024-07-24 19:50:38.123634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:46.706 [2024-07-24 19:50:38.123640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.124633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:46.706 [2024-07-24 19:50:38.124642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.125635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:46.706 [2024-07-24 19:50:38.125646] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:46.706 [2024-07-24 19:50:38.125652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.125660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.125770] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:46.706 [2024-07-24 19:50:38.125775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.125780] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:46.706 [2024-07-24 19:50:38.126643] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:46.706 [2024-07-24 19:50:38.127647] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:46.706 [2024-07-24 19:50:38.128658] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:46.706 [2024-07-24 19:50:38.129658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.706 [2024-07-24 19:50:38.129697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.706 [2024-07-24 19:50:38.130665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:46.706 [2024-07-24 19:50:38.130675] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.706 [2024-07-24 19:50:38.130680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.130697] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:46.706 [2024-07-24 19:50:38.130707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.130718] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.706 [2024-07-24 19:50:38.130723] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.706 [2024-07-24 19:50:38.130726] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.706 [2024-07-24 19:50:38.130737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.706 [2024-07-24 19:50:38.138053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.706 [2024-07-24 19:50:38.138064] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:46.706 [2024-07-24 19:50:38.138069] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:46.706 [2024-07-24 19:50:38.138072] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:46.706 [2024-07-24 19:50:38.138077] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.706 [2024-07-24 19:50:38.138081] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:46.706 [2024-07-24 19:50:38.138085] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:46.706 [2024-07-24 19:50:38.138089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.138096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.138110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.706 [2024-07-24 19:50:38.146050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.706 [2024-07-24 19:50:38.146064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.706 [2024-07-24 19:50:38.146072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.706 [2024-07-24 19:50:38.146080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.706 [2024-07-24 19:50:38.146087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.706 [2024-07-24 19:50:38.146091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.146098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.146107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.706 [2024-07-24 19:50:38.154052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.706 [2024-07-24 19:50:38.154061] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:46.706 [2024-07-24 19:50:38.154066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.154073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.154079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:46.706 [2024-07-24 19:50:38.154087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.162051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.162106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.162114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.162121] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.707 [2024-07-24 19:50:38.162125] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.707 [2024-07-24 19:50:38.162128] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.162134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.170051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.170062] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:46.707 [2024-07-24 19:50:38.170070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.170080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.170086] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.707 [2024-07-24 19:50:38.170090] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.707 [2024-07-24 19:50:38.170093] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.170099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.178049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.178063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.178070] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.178077] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.707 [2024-07-24 19:50:38.178081] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.707 [2024-07-24 19:50:38.178084] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.178089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.186049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.186067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186101] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.707 [2024-07-24 19:50:38.186105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:46.707 [2024-07-24 19:50:38.186110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:46.707 [2024-07-24 19:50:38.186125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.194048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.194060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.202050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.202065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.210048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.210060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.218051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.218066] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.707 [2024-07-24 19:50:38.218070] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.707 [2024-07-24 19:50:38.218073] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.707 [2024-07-24 19:50:38.218076] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.707 [2024-07-24 19:50:38.218079] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:46.707 [2024-07-24 19:50:38.218085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:46.707 [2024-07-24 19:50:38.218091] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.707 [2024-07-24 19:50:38.218095] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.707 [2024-07-24 19:50:38.218098] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.218103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.218110] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.707 [2024-07-24 19:50:38.218113] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.707 [2024-07-24 19:50:38.218116] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.218121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.218128] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.707 [2024-07-24 19:50:38.218132] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.707 [2024-07-24 19:50:38.218135] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.707 [2024-07-24 19:50:38.218140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.707 [2024-07-24 19:50:38.226050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.226063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.226073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.707 [2024-07-24 19:50:38.226079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.707 ===================================================== 00:14:46.707 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.707 ===================================================== 00:14:46.707 Controller Capabilities/Features 00:14:46.707 ================================ 00:14:46.707 Vendor ID: 4e58 00:14:46.707 Subsystem Vendor ID: 4e58 00:14:46.707 Serial Number: SPDK2 00:14:46.707 Model Number: SPDK bdev Controller 00:14:46.707 Firmware Version: 24.09 00:14:46.707 Recommended Arb Burst: 6 00:14:46.707 IEEE OUI Identifier: 8d 6b 50 00:14:46.707 Multi-path I/O 00:14:46.707 May have multiple subsystem ports: Yes 00:14:46.707 May have multiple controllers: Yes 00:14:46.707 Associated with SR-IOV VF: No 00:14:46.707 Max Data Transfer Size: 131072 00:14:46.707 Max Number of Namespaces: 32 00:14:46.707 Max Number of I/O Queues: 127 00:14:46.707 NVMe Specification Version (VS): 1.3 00:14:46.707 NVMe Specification Version (Identify): 1.3 00:14:46.707 Maximum Queue Entries: 256 00:14:46.707 Contiguous Queues Required: Yes 00:14:46.707 Arbitration Mechanisms Supported 00:14:46.707 Weighted Round Robin: Not Supported 00:14:46.707 Vendor Specific: Not Supported 00:14:46.707 Reset Timeout: 15000 ms 00:14:46.707 Doorbell Stride: 4 bytes 00:14:46.707 NVM Subsystem Reset: Not Supported 00:14:46.707 Command Sets Supported 00:14:46.707 NVM Command Set: Supported 00:14:46.707 Boot Partition: Not Supported 00:14:46.707 Memory Page Size Minimum: 4096 bytes 00:14:46.707 Memory Page Size Maximum: 4096 bytes 00:14:46.707 Persistent Memory Region: Not Supported 00:14:46.707 Optional Asynchronous Events Supported 00:14:46.707 Namespace Attribute Notices: Supported 00:14:46.707 Firmware Activation Notices: Not Supported 00:14:46.707 ANA Change Notices: Not Supported 00:14:46.707 PLE Aggregate Log Change Notices: Not Supported 00:14:46.707 LBA Status Info Alert Notices: Not Supported 00:14:46.707 EGE Aggregate Log Change Notices: Not Supported 00:14:46.708 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.708 Zone Descriptor Change Notices: Not Supported 00:14:46.708 Discovery Log Change Notices: Not Supported 00:14:46.708 Controller Attributes 00:14:46.708 128-bit Host Identifier: Supported 00:14:46.708 Non-Operational Permissive Mode: Not Supported 00:14:46.708 NVM Sets: Not Supported 00:14:46.708 Read Recovery Levels: Not Supported 00:14:46.708 Endurance Groups: Not Supported 00:14:46.708 Predictable Latency Mode: Not Supported 00:14:46.708 Traffic Based Keep ALive: Not Supported 00:14:46.708 Namespace Granularity: Not Supported 00:14:46.708 SQ Associations: Not Supported 00:14:46.708 UUID List: Not Supported 00:14:46.708 Multi-Domain Subsystem: Not Supported 00:14:46.708 Fixed Capacity Management: Not Supported 00:14:46.708 Variable Capacity Management: Not Supported 00:14:46.708 Delete Endurance Group: Not Supported 00:14:46.708 Delete NVM Set: Not Supported 00:14:46.708 Extended LBA Formats Supported: Not Supported 00:14:46.708 Flexible Data Placement Supported: Not Supported 00:14:46.708 00:14:46.708 Controller Memory Buffer Support 00:14:46.708 ================================ 00:14:46.708 Supported: No 00:14:46.708 00:14:46.708 Persistent Memory Region Support 00:14:46.708 
================================ 00:14:46.708 Supported: No 00:14:46.708 00:14:46.708 Admin Command Set Attributes 00:14:46.708 ============================ 00:14:46.708 Security Send/Receive: Not Supported 00:14:46.708 Format NVM: Not Supported 00:14:46.708 Firmware Activate/Download: Not Supported 00:14:46.708 Namespace Management: Not Supported 00:14:46.708 Device Self-Test: Not Supported 00:14:46.708 Directives: Not Supported 00:14:46.708 NVMe-MI: Not Supported 00:14:46.708 Virtualization Management: Not Supported 00:14:46.708 Doorbell Buffer Config: Not Supported 00:14:46.708 Get LBA Status Capability: Not Supported 00:14:46.708 Command & Feature Lockdown Capability: Not Supported 00:14:46.708 Abort Command Limit: 4 00:14:46.708 Async Event Request Limit: 4 00:14:46.708 Number of Firmware Slots: N/A 00:14:46.708 Firmware Slot 1 Read-Only: N/A 00:14:46.708 Firmware Activation Without Reset: N/A 00:14:46.708 Multiple Update Detection Support: N/A 00:14:46.708 Firmware Update Granularity: No Information Provided 00:14:46.708 Per-Namespace SMART Log: No 00:14:46.708 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.708 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:46.708 Command Effects Log Page: Supported 00:14:46.708 Get Log Page Extended Data: Supported 00:14:46.708 Telemetry Log Pages: Not Supported 00:14:46.708 Persistent Event Log Pages: Not Supported 00:14:46.708 Supported Log Pages Log Page: May Support 00:14:46.708 Commands Supported & Effects Log Page: Not Supported 00:14:46.708 Feature Identifiers & Effects Log Page:May Support 00:14:46.708 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.708 Data Area 4 for Telemetry Log: Not Supported 00:14:46.708 Error Log Page Entries Supported: 128 00:14:46.708 Keep Alive: Supported 00:14:46.708 Keep Alive Granularity: 10000 ms 00:14:46.708 00:14:46.708 NVM Command Set Attributes 00:14:46.708 ========================== 00:14:46.708 Submission Queue Entry Size 00:14:46.708 Max: 64 00:14:46.708 Min: 64 00:14:46.708 Completion Queue Entry Size 00:14:46.708 Max: 16 00:14:46.708 Min: 16 00:14:46.708 Number of Namespaces: 32 00:14:46.708 Compare Command: Supported 00:14:46.708 Write Uncorrectable Command: Not Supported 00:14:46.708 Dataset Management Command: Supported 00:14:46.708 Write Zeroes Command: Supported 00:14:46.708 Set Features Save Field: Not Supported 00:14:46.708 Reservations: Not Supported 00:14:46.708 Timestamp: Not Supported 00:14:46.708 Copy: Supported 00:14:46.708 Volatile Write Cache: Present 00:14:46.708 Atomic Write Unit (Normal): 1 00:14:46.708 Atomic Write Unit (PFail): 1 00:14:46.708 Atomic Compare & Write Unit: 1 00:14:46.708 Fused Compare & Write: Supported 00:14:46.708 Scatter-Gather List 00:14:46.708 SGL Command Set: Supported (Dword aligned) 00:14:46.708 SGL Keyed: Not Supported 00:14:46.708 SGL Bit Bucket Descriptor: Not Supported 00:14:46.708 SGL Metadata Pointer: Not Supported 00:14:46.708 Oversized SGL: Not Supported 00:14:46.708 SGL Metadata Address: Not Supported 00:14:46.708 SGL Offset: Not Supported 00:14:46.708 Transport SGL Data Block: Not Supported 00:14:46.708 Replay Protected Memory Block: Not Supported 00:14:46.708 00:14:46.708 Firmware Slot Information 00:14:46.708 ========================= 00:14:46.708 Active slot: 1 00:14:46.708 Slot 1 Firmware Revision: 24.09 00:14:46.708 00:14:46.708 00:14:46.708 Commands Supported and Effects 00:14:46.708 ============================== 00:14:46.708 Admin Commands 00:14:46.708 -------------- 00:14:46.708 Get Log Page (02h): Supported 
00:14:46.708 Identify (06h): Supported 00:14:46.708 Abort (08h): Supported 00:14:46.708 Set Features (09h): Supported 00:14:46.708 Get Features (0Ah): Supported 00:14:46.708 Asynchronous Event Request (0Ch): Supported 00:14:46.708 Keep Alive (18h): Supported 00:14:46.708 I/O Commands 00:14:46.708 ------------ 00:14:46.708 Flush (00h): Supported LBA-Change 00:14:46.708 Write (01h): Supported LBA-Change 00:14:46.708 Read (02h): Supported 00:14:46.708 Compare (05h): Supported 00:14:46.708 Write Zeroes (08h): Supported LBA-Change 00:14:46.708 Dataset Management (09h): Supported LBA-Change 00:14:46.708 Copy (19h): Supported LBA-Change 00:14:46.708 00:14:46.708 Error Log 00:14:46.708 ========= 00:14:46.708 00:14:46.708 Arbitration 00:14:46.708 =========== 00:14:46.708 Arbitration Burst: 1 00:14:46.708 00:14:46.708 Power Management 00:14:46.708 ================ 00:14:46.708 Number of Power States: 1 00:14:46.708 Current Power State: Power State #0 00:14:46.708 Power State #0: 00:14:46.708 Max Power: 0.00 W 00:14:46.708 Non-Operational State: Operational 00:14:46.708 Entry Latency: Not Reported 00:14:46.708 Exit Latency: Not Reported 00:14:46.708 Relative Read Throughput: 0 00:14:46.708 Relative Read Latency: 0 00:14:46.708 Relative Write Throughput: 0 00:14:46.708 Relative Write Latency: 0 00:14:46.708 Idle Power: Not Reported 00:14:46.708 Active Power: Not Reported 00:14:46.708 Non-Operational Permissive Mode: Not Supported 00:14:46.708 00:14:46.708 Health Information 00:14:46.708 ================== 00:14:46.708 Critical Warnings: 00:14:46.708 Available Spare Space: OK 00:14:46.708 Temperature: OK 00:14:46.708 Device Reliability: OK 00:14:46.708 Read Only: No 00:14:46.708 Volatile Memory Backup: OK 00:14:46.708 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:46.708 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.708 Available Spare: 0% 00:14:46.708 Available Spare Threshold: 0% 00:14:46.709 [2024-07-24 19:50:38.226165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.708 [2024-07-24 19:50:38.234050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.708 [2024-07-24 19:50:38.234079] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:46.708 [2024-07-24 19:50:38.234088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.708 [2024-07-24 19:50:38.234094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.708 [2024-07-24 19:50:38.234100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.708 [2024-07-24 19:50:38.234105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.708 [2024-07-24 19:50:38.234142] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:46.708 [2024-07-24 19:50:38.234152] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:46.708 [2024-07-24 19:50:38.235152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.708 [2024-07-24 19:50:38.235194] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:46.708 [2024-07-24 19:50:38.235200] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:46.708 [2024-07-24 19:50:38.236159] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:46.708 [2024-07-24 19:50:38.236170] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:46.708 [2024-07-24 19:50:38.236215] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:46.708 [2024-07-24 19:50:38.237192] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.709 Life Percentage Used: 0% 00:14:46.709 Data Units Read: 0 00:14:46.709 Data Units Written: 0 00:14:46.709 Host Read Commands: 0 00:14:46.709 Host Write Commands: 0 00:14:46.709 Controller Busy Time: 0 minutes 00:14:46.709 Power Cycles: 0 00:14:46.709 Power On Hours: 0 hours 00:14:46.709 Unsafe Shutdowns: 0 00:14:46.709 Unrecoverable Media Errors: 0 00:14:46.709 Lifetime Error Log Entries: 0 00:14:46.709 Warning Temperature Time: 0 minutes 00:14:46.709 Critical Temperature Time: 0 minutes 00:14:46.709 00:14:46.709 Number of Queues 00:14:46.709 ================ 00:14:46.709 Number of I/O Submission Queues: 127 00:14:46.709 Number of I/O Completion Queues: 127 00:14:46.709 00:14:46.709 Active Namespaces 00:14:46.709 ================= 00:14:46.709 Namespace ID:1 00:14:46.709 Error Recovery Timeout: Unlimited 00:14:46.709 Command Set Identifier: NVM (00h) 00:14:46.709 Deallocate: Supported 00:14:46.709 Deallocated/Unwritten Error: Not Supported 00:14:46.709 Deallocated Read Value: Unknown 00:14:46.709 Deallocate in Write Zeroes: Not Supported 00:14:46.709 Deallocated Guard Field: 0xFFFF 00:14:46.709 Flush: Supported 00:14:46.709 Reservation: Supported 00:14:46.709 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.709 Size (in LBAs): 131072 (0GiB) 00:14:46.709 Capacity (in LBAs): 131072 (0GiB) 00:14:46.709 Utilization (in LBAs): 131072 (0GiB) 00:14:46.709 NGUID: B518B0CD7E834CC491BECF8B6244B40A 00:14:46.709 UUID: b518b0cd-7e83-4cc4-91be-cf8b6244b40a 00:14:46.709 Thin Provisioning: Not Supported 00:14:46.709 Per-NS Atomic Units: Yes 00:14:46.709 Atomic Boundary Size (Normal): 0 00:14:46.709 Atomic Boundary Size (PFail): 0 00:14:46.709 Atomic Boundary Offset: 0 00:14:46.709 Maximum Single Source Range Length: 65535 00:14:46.709 Maximum Copy Length: 65535 00:14:46.709 Maximum Source Range Count: 1 00:14:46.709 NGUID/EUI64 Never Reused: No 00:14:46.709 Namespace Write Protected: No 00:14:46.709 Number of LBA Formats: 1 00:14:46.709 Current LBA Format: LBA Format #00 00:14:46.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:46.709 00:14:46.709 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:46.968 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.968 [2024-07-24 
19:50:38.450367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.276 Initializing NVMe Controllers 00:14:52.276 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:52.276 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:52.276 Initialization complete. Launching workers. 00:14:52.276 ======================================================== 00:14:52.276 Latency(us) 00:14:52.276 Device Information : IOPS MiB/s Average min max 00:14:52.276 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39882.80 155.79 3209.23 964.43 10323.89 00:14:52.276 ======================================================== 00:14:52.276 Total : 39882.80 155.79 3209.23 964.43 10323.89 00:14:52.276 00:14:52.276 [2024-07-24 19:50:43.558295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.276 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:52.276 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.276 [2024-07-24 19:50:43.776971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.568 Initializing NVMe Controllers 00:14:57.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:57.568 Initialization complete. Launching workers. 
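Before these device-2 passes, the AER test earlier in the log (target/nvmf_vfio_user.sh@27 through @44) verified namespace hot-add notification over JSON-RPC. The sequence, condensed (flags copied from the trace above; variables are the same placeholders as in the earlier sketches):

RPC="${SPDK_DIR}/scripts/rpc.py"
# 1. Start the AER listener; it touches the given file once the event
#    arrives.
"${SPDK_DIR}/test/nvme/aer/aer" \
  -r "trtype:VFIOUSER traddr:${SOCK} subnqn:${NQN}" \
  -n 2 -g -t /tmp/aer_touch_file &
# 2. Hot-add a namespace: a 64 MB malloc bdev with 512-byte blocks,
#    attached to the subsystem as NSID 2.
"$RPC" bdev_malloc_create 64 512 --name Malloc3
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# 3. Wait for the namespace-attribute-changed notice (log page 4 in the
#    aer_cb output above) to land.
while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done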
00:14:57.568 ======================================================== 00:14:57.568 Latency(us) 00:14:57.568 Device Information : IOPS MiB/s Average min max 00:14:57.568 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39839.02 155.62 3212.76 996.53 11203.21 00:14:57.568 ======================================================== 00:14:57.568 Total : 39839.02 155.62 3212.76 996.53 11203.21 00:14:57.568 00:14:57.568 [2024-07-24 19:50:48.797244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.568 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.568 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.568 [2024-07-24 19:50:48.984619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:02.842 [2024-07-24 19:50:54.125137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:02.842 Initializing NVMe Controllers 00:15:02.842 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:02.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:02.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:02.842 Initialization complete. Launching workers. 00:15:02.842 Starting thread on core 2 00:15:02.842 Starting thread on core 3 00:15:02.842 Starting thread on core 1 00:15:02.842 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:02.842 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.842 [2024-07-24 19:50:54.407479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.132 [2024-07-24 19:50:57.479394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.132 Initializing NVMe Controllers 00:15:06.132 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.132 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:06.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:06.132 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:06.133 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:06.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:06.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:06.133 Initialization complete. Launching workers. 
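In the arbitration tables, the two per-core columns are the same measurement in different units: the run issues a fixed 100000 I/Os per core (-n 100000 in the configuration line above), so secs/100000 ios = 100000 / IO/s. For core 0 in the table below, 100000 / 9792.67 IO/s = 10.21 s, which is exactly the value in the second column.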
00:15:06.133 Starting thread on core 1 with urgent priority queue 00:15:06.133 Starting thread on core 2 with urgent priority queue 00:15:06.133 Starting thread on core 3 with urgent priority queue 00:15:06.133 Starting thread on core 0 with urgent priority queue 00:15:06.133 SPDK bdev Controller (SPDK2 ) core 0: 9792.67 IO/s 10.21 secs/100000 ios 00:15:06.133 SPDK bdev Controller (SPDK2 ) core 1: 7866.00 IO/s 12.71 secs/100000 ios 00:15:06.133 SPDK bdev Controller (SPDK2 ) core 2: 7891.33 IO/s 12.67 secs/100000 ios 00:15:06.133 SPDK bdev Controller (SPDK2 ) core 3: 8076.33 IO/s 12.38 secs/100000 ios 00:15:06.133 ======================================================== 00:15:06.133 00:15:06.133 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:06.133 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.391 [2024-07-24 19:50:57.755508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.391 Initializing NVMe Controllers 00:15:06.391 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.391 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.391 Namespace ID: 1 size: 0GB 00:15:06.391 Initialization complete. 00:15:06.391 INFO: using host memory buffer for IO 00:15:06.391 Hello world! 00:15:06.391 [2024-07-24 19:50:57.765567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.391 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:06.391 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.650 [2024-07-24 19:50:58.041101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.587 Initializing NVMe Controllers 00:15:07.587 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.587 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:07.587 Initialization complete. Launching workers. 
00:15:07.587 submit (in ns) avg, min, max = 7219.4, 3290.4, 4002185.2 00:15:07.587 complete (in ns) avg, min, max = 20878.9, 1788.7, 3999660.9 00:15:07.587 00:15:07.587 Submit histogram 00:15:07.587 ================ 00:15:07.587 Range in us Cumulative Count 00:15:07.587 3.283 - 3.297: 0.0186% ( 3) 00:15:07.587 3.297 - 3.311: 0.2479% ( 37) 00:15:07.587 3.311 - 3.325: 1.0041% ( 122) 00:15:07.587 3.325 - 3.339: 2.3057% ( 210) 00:15:07.587 3.339 - 3.353: 3.9606% ( 267) 00:15:07.587 3.353 - 3.367: 7.1278% ( 511) 00:15:07.587 3.367 - 3.381: 12.1359% ( 808) 00:15:07.587 3.381 - 3.395: 18.0179% ( 949) 00:15:07.587 3.395 - 3.409: 24.1478% ( 989) 00:15:07.587 3.409 - 3.423: 29.8996% ( 928) 00:15:07.587 3.423 - 3.437: 35.1990% ( 855) 00:15:07.587 3.437 - 3.450: 39.9219% ( 762) 00:15:07.587 3.450 - 3.464: 45.3452% ( 875) 00:15:07.587 3.464 - 3.478: 50.2479% ( 791) 00:15:07.587 3.478 - 3.492: 54.6548% ( 711) 00:15:07.587 3.492 - 3.506: 58.9129% ( 687) 00:15:07.587 3.506 - 3.520: 64.6771% ( 930) 00:15:07.587 3.520 - 3.534: 70.8566% ( 997) 00:15:07.587 3.534 - 3.548: 75.0899% ( 683) 00:15:07.587 3.548 - 3.562: 78.9885% ( 629) 00:15:07.587 3.562 - 3.590: 85.2299% ( 1007) 00:15:07.587 3.590 - 3.617: 87.5542% ( 375) 00:15:07.587 3.617 - 3.645: 88.4220% ( 140) 00:15:07.587 3.645 - 3.673: 89.6554% ( 199) 00:15:07.587 3.673 - 3.701: 91.4156% ( 284) 00:15:07.587 3.701 - 3.729: 93.0767% ( 268) 00:15:07.587 3.729 - 3.757: 94.5953% ( 245) 00:15:07.587 3.757 - 3.784: 96.1634% ( 253) 00:15:07.587 3.784 - 3.812: 97.6695% ( 243) 00:15:07.587 3.812 - 3.840: 98.5496% ( 142) 00:15:07.587 3.840 - 3.868: 99.0765% ( 85) 00:15:07.587 3.868 - 3.896: 99.3244% ( 40) 00:15:07.587 3.896 - 3.923: 99.5104% ( 30) 00:15:07.587 3.923 - 3.951: 99.5785% ( 11) 00:15:07.587 3.951 - 3.979: 99.5971% ( 3) 00:15:07.587 5.259 - 5.287: 99.6033% ( 1) 00:15:07.587 5.704 - 5.732: 99.6095% ( 1) 00:15:07.587 5.788 - 5.816: 99.6157% ( 1) 00:15:07.587 5.927 - 5.955: 99.6219% ( 1) 00:15:07.587 5.983 - 6.010: 99.6281% ( 1) 00:15:07.587 6.177 - 6.205: 99.6343% ( 1) 00:15:07.587 6.233 - 6.261: 99.6405% ( 1) 00:15:07.587 6.261 - 6.289: 99.6591% ( 3) 00:15:07.587 6.289 - 6.317: 99.6653% ( 1) 00:15:07.587 6.317 - 6.344: 99.6715% ( 1) 00:15:07.587 6.344 - 6.372: 99.6777% ( 1) 00:15:07.587 6.400 - 6.428: 99.6839% ( 1) 00:15:07.587 6.456 - 6.483: 99.6901% ( 1) 00:15:07.587 6.511 - 6.539: 99.6963% ( 1) 00:15:07.587 6.539 - 6.567: 99.7087% ( 2) 00:15:07.587 6.567 - 6.595: 99.7149% ( 1) 00:15:07.587 6.678 - 6.706: 99.7211% ( 1) 00:15:07.588 6.706 - 6.734: 99.7273% ( 1) 00:15:07.588 6.734 - 6.762: 99.7335% ( 1) 00:15:07.588 6.762 - 6.790: 99.7397% ( 1) 00:15:07.588 6.845 - 6.873: 99.7459% ( 1) 00:15:07.588 6.873 - 6.901: 99.7521% ( 1) 00:15:07.588 6.929 - 6.957: 99.7583% ( 1) 00:15:07.588 6.957 - 6.984: 99.7707% ( 2) 00:15:07.588 6.984 - 7.012: 99.7769% ( 1) 00:15:07.588 7.096 - 7.123: 99.7831% ( 1) 00:15:07.588 7.123 - 7.179: 99.7955% ( 2) 00:15:07.588 7.235 - 7.290: 99.8203% ( 4) 00:15:07.588 7.290 - 7.346: 99.8265% ( 1) 00:15:07.588 7.346 - 7.402: 99.8388% ( 2) 00:15:07.588 7.457 - 7.513: 99.8574% ( 3) 00:15:07.588 7.791 - 7.847: 99.8636% ( 1) 00:15:07.588 8.014 - 8.070: 99.8884% ( 4) 00:15:07.588 8.125 - 8.181: 99.8946% ( 1) 00:15:07.588 8.348 - 8.403: 99.9008% ( 1) 00:15:07.588 12.355 - 12.410: 99.9070% ( 1) 00:15:07.588 3960.654 - 3989.148: 99.9132% ( 1) 00:15:07.588 3989.148 - 4017.642: 100.0000% ( 14) 00:15:07.588 00:15:07.588 Complete histogram 00:15:07.588 ================== 00:15:07.588 Range in us Cumulative Count 00:15:07.588 1.781 - 1.795: 0.0062% ( 
1) 00:15:07.588 1.795 - 1.809: 0.7686% ( 123) 00:15:07.588 1.809 - 1.823: 21.9041% ( 3410) 00:15:07.588 1.823 - 1.837: 51.9772% ( 4852) 00:15:07.588 1.837 - [2024-07-24 19:50:59.143126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.588 1.850: 57.5865% ( 905) 00:15:07.588 1.850 - 1.864: 68.5075% ( 1762) 00:15:07.588 1.864 - 1.878: 88.6885% ( 3256) 00:15:07.588 1.878 - 1.892: 93.9879% ( 855) 00:15:07.588 1.892 - 1.906: 96.4795% ( 402) 00:15:07.588 1.906 - 1.920: 97.8307% ( 218) 00:15:07.588 1.920 - 1.934: 98.3265% ( 80) 00:15:07.588 1.934 - 1.948: 98.7728% ( 72) 00:15:07.588 1.948 - 1.962: 99.0083% ( 38) 00:15:07.588 1.962 - 1.976: 99.1323% ( 20) 00:15:07.588 1.976 - 1.990: 99.1695% ( 6) 00:15:07.588 1.990 - 2.003: 99.1942% ( 4) 00:15:07.588 2.003 - 2.017: 99.2190% ( 4) 00:15:07.588 2.017 - 2.031: 99.2438% ( 4) 00:15:07.588 2.031 - 2.045: 99.2686% ( 4) 00:15:07.588 2.045 - 2.059: 99.2748% ( 1) 00:15:07.588 2.059 - 2.073: 99.2810% ( 1) 00:15:07.588 2.073 - 2.087: 99.2934% ( 2) 00:15:07.588 2.087 - 2.101: 99.2996% ( 1) 00:15:07.588 2.115 - 2.129: 99.3058% ( 1) 00:15:07.588 3.979 - 4.007: 99.3120% ( 1) 00:15:07.588 4.007 - 4.035: 99.3182% ( 1) 00:15:07.588 4.230 - 4.257: 99.3244% ( 1) 00:15:07.588 4.563 - 4.591: 99.3306% ( 1) 00:15:07.588 4.619 - 4.647: 99.3368% ( 1) 00:15:07.588 4.647 - 4.675: 99.3430% ( 1) 00:15:07.588 4.675 - 4.703: 99.3492% ( 1) 00:15:07.588 4.703 - 4.730: 99.3554% ( 1) 00:15:07.588 4.758 - 4.786: 99.3678% ( 2) 00:15:07.588 4.842 - 4.870: 99.3740% ( 1) 00:15:07.588 5.009 - 5.037: 99.3802% ( 1) 00:15:07.588 5.120 - 5.148: 99.3864% ( 1) 00:15:07.588 5.176 - 5.203: 99.3926% ( 1) 00:15:07.588 5.287 - 5.315: 99.3988% ( 1) 00:15:07.588 5.426 - 5.454: 99.4050% ( 1) 00:15:07.588 5.510 - 5.537: 99.4112% ( 1) 00:15:07.588 5.621 - 5.649: 99.4174% ( 1) 00:15:07.588 5.677 - 5.704: 99.4236% ( 1) 00:15:07.588 5.788 - 5.816: 99.4298% ( 1) 00:15:07.588 5.899 - 5.927: 99.4360% ( 1) 00:15:07.588 5.927 - 5.955: 99.4422% ( 1) 00:15:07.588 6.122 - 6.150: 99.4484% ( 1) 00:15:07.588 6.233 - 6.261: 99.4546% ( 1) 00:15:07.588 6.289 - 6.317: 99.4608% ( 1) 00:15:07.588 6.678 - 6.706: 99.4732% ( 2) 00:15:07.588 6.734 - 6.762: 99.4794% ( 1) 00:15:07.588 6.845 - 6.873: 99.4918% ( 2) 00:15:07.588 7.402 - 7.457: 99.4980% ( 1) 00:15:07.588 7.624 - 7.680: 99.5042% ( 1) 00:15:07.588 8.014 - 8.070: 99.5104% ( 1) 00:15:07.588 9.405 - 9.461: 99.5165% ( 1) 00:15:07.588 38.066 - 38.289: 99.5227% ( 1) 00:15:07.588 3305.294 - 3319.541: 99.5289% ( 1) 00:15:07.588 3989.148 - 4017.642: 100.0000% ( 76) 00:15:07.588 00:15:07.588 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:07.588 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:07.588 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:07.588 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:07.588 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.847 [ 00:15:07.847 { 00:15:07.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.847 "subtype": "Discovery", 00:15:07.847 "listen_addresses": [], 
00:15:07.848 "allow_any_host": true, 00:15:07.848 "hosts": [] 00:15:07.848 }, 00:15:07.848 { 00:15:07.848 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.848 "subtype": "NVMe", 00:15:07.848 "listen_addresses": [ 00:15:07.848 { 00:15:07.848 "trtype": "VFIOUSER", 00:15:07.848 "adrfam": "IPv4", 00:15:07.848 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.848 "trsvcid": "0" 00:15:07.848 } 00:15:07.848 ], 00:15:07.848 "allow_any_host": true, 00:15:07.848 "hosts": [], 00:15:07.848 "serial_number": "SPDK1", 00:15:07.848 "model_number": "SPDK bdev Controller", 00:15:07.848 "max_namespaces": 32, 00:15:07.848 "min_cntlid": 1, 00:15:07.848 "max_cntlid": 65519, 00:15:07.848 "namespaces": [ 00:15:07.848 { 00:15:07.848 "nsid": 1, 00:15:07.848 "bdev_name": "Malloc1", 00:15:07.848 "name": "Malloc1", 00:15:07.848 "nguid": "462FBF8AFB8946C988A59D7E956A9134", 00:15:07.848 "uuid": "462fbf8a-fb89-46c9-88a5-9d7e956a9134" 00:15:07.848 }, 00:15:07.848 { 00:15:07.848 "nsid": 2, 00:15:07.848 "bdev_name": "Malloc3", 00:15:07.848 "name": "Malloc3", 00:15:07.848 "nguid": "B6325A0E85ED4561A96B41E002437CF7", 00:15:07.848 "uuid": "b6325a0e-85ed-4561-a96b-41e002437cf7" 00:15:07.848 } 00:15:07.848 ] 00:15:07.848 }, 00:15:07.848 { 00:15:07.848 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.848 "subtype": "NVMe", 00:15:07.848 "listen_addresses": [ 00:15:07.848 { 00:15:07.848 "trtype": "VFIOUSER", 00:15:07.848 "adrfam": "IPv4", 00:15:07.848 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.848 "trsvcid": "0" 00:15:07.848 } 00:15:07.848 ], 00:15:07.848 "allow_any_host": true, 00:15:07.848 "hosts": [], 00:15:07.848 "serial_number": "SPDK2", 00:15:07.848 "model_number": "SPDK bdev Controller", 00:15:07.848 "max_namespaces": 32, 00:15:07.848 "min_cntlid": 1, 00:15:07.848 "max_cntlid": 65519, 00:15:07.848 "namespaces": [ 00:15:07.848 { 00:15:07.848 "nsid": 1, 00:15:07.848 "bdev_name": "Malloc2", 00:15:07.848 "name": "Malloc2", 00:15:07.848 "nguid": "B518B0CD7E834CC491BECF8B6244B40A", 00:15:07.848 "uuid": "b518b0cd-7e83-4cc4-91be-cf8b6244b40a" 00:15:07.848 } 00:15:07.848 ] 00:15:07.848 } 00:15:07.848 ] 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2033771 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:07.848 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:07.848 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.107 [2024-07-24 19:50:59.521506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.107 Malloc4 00:15:08.107 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:08.366 [2024-07-24 19:50:59.748128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.366 Asynchronous Event Request test 00:15:08.366 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.366 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.366 Registering asynchronous event callbacks... 00:15:08.366 Starting namespace attribute notice tests for all controllers... 00:15:08.366 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:08.366 aer_cb - Changed Namespace 00:15:08.366 Cleaning up... 00:15:08.366 [ 00:15:08.366 { 00:15:08.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.366 "subtype": "Discovery", 00:15:08.366 "listen_addresses": [], 00:15:08.366 "allow_any_host": true, 00:15:08.366 "hosts": [] 00:15:08.366 }, 00:15:08.366 { 00:15:08.366 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.366 "subtype": "NVMe", 00:15:08.366 "listen_addresses": [ 00:15:08.366 { 00:15:08.366 "trtype": "VFIOUSER", 00:15:08.366 "adrfam": "IPv4", 00:15:08.366 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.366 "trsvcid": "0" 00:15:08.366 } 00:15:08.366 ], 00:15:08.366 "allow_any_host": true, 00:15:08.366 "hosts": [], 00:15:08.366 "serial_number": "SPDK1", 00:15:08.366 "model_number": "SPDK bdev Controller", 00:15:08.366 "max_namespaces": 32, 00:15:08.366 "min_cntlid": 1, 00:15:08.366 "max_cntlid": 65519, 00:15:08.366 "namespaces": [ 00:15:08.366 { 00:15:08.366 "nsid": 1, 00:15:08.366 "bdev_name": "Malloc1", 00:15:08.366 "name": "Malloc1", 00:15:08.366 "nguid": "462FBF8AFB8946C988A59D7E956A9134", 00:15:08.366 "uuid": "462fbf8a-fb89-46c9-88a5-9d7e956a9134" 00:15:08.366 }, 00:15:08.366 { 00:15:08.366 "nsid": 2, 00:15:08.366 "bdev_name": "Malloc3", 00:15:08.366 "name": "Malloc3", 00:15:08.366 "nguid": "B6325A0E85ED4561A96B41E002437CF7", 00:15:08.366 "uuid": "b6325a0e-85ed-4561-a96b-41e002437cf7" 00:15:08.366 } 00:15:08.366 ] 00:15:08.366 }, 00:15:08.366 { 00:15:08.366 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.366 "subtype": "NVMe", 00:15:08.366 "listen_addresses": [ 00:15:08.366 { 00:15:08.366 "trtype": "VFIOUSER", 00:15:08.366 "adrfam": "IPv4", 00:15:08.366 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.366 "trsvcid": "0" 00:15:08.366 } 00:15:08.366 ], 00:15:08.366 "allow_any_host": true, 00:15:08.366 "hosts": [], 00:15:08.366 
"serial_number": "SPDK2", 00:15:08.366 "model_number": "SPDK bdev Controller", 00:15:08.366 "max_namespaces": 32, 00:15:08.366 "min_cntlid": 1, 00:15:08.366 "max_cntlid": 65519, 00:15:08.366 "namespaces": [ 00:15:08.366 { 00:15:08.366 "nsid": 1, 00:15:08.366 "bdev_name": "Malloc2", 00:15:08.366 "name": "Malloc2", 00:15:08.366 "nguid": "B518B0CD7E834CC491BECF8B6244B40A", 00:15:08.366 "uuid": "b518b0cd-7e83-4cc4-91be-cf8b6244b40a" 00:15:08.366 }, 00:15:08.366 { 00:15:08.366 "nsid": 2, 00:15:08.366 "bdev_name": "Malloc4", 00:15:08.366 "name": "Malloc4", 00:15:08.366 "nguid": "2285E0A65E8A4BFA9B80BB8194594EB2", 00:15:08.366 "uuid": "2285e0a6-5e8a-4bfa-9b80-bb8194594eb2" 00:15:08.366 } 00:15:08.366 ] 00:15:08.366 } 00:15:08.366 ] 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2033771 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2026121 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2026121 ']' 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2026121 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.366 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2026121 00:15:08.625 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.625 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.625 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2026121' 00:15:08.625 killing process with pid 2026121 00:15:08.625 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2026121 00:15:08.625 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2026121 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2033821 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2033821' 00:15:08.884 Process pid: 2033821 00:15:08.884 19:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2033821 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2033821 ']' 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.884 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:08.884 [2024-07-24 19:51:00.293188] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:08.884 [2024-07-24 19:51:00.294080] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:15:08.884 [2024-07-24 19:51:00.294116] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.884 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.884 [2024-07-24 19:51:00.348960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.884 [2024-07-24 19:51:00.423843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.884 [2024-07-24 19:51:00.423886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.884 [2024-07-24 19:51:00.423893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.884 [2024-07-24 19:51:00.423899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.884 [2024-07-24 19:51:00.423904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.884 [2024-07-24 19:51:00.423966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.884 [2024-07-24 19:51:00.423988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.884 [2024-07-24 19:51:00.424077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.884 [2024-07-24 19:51:00.424079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.144 [2024-07-24 19:51:00.510333] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:09.144 [2024-07-24 19:51:00.510430] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:09.144 [2024-07-24 19:51:00.510667] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
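(For orientation — the xtrace that follows builds each vfio-user device one RPC at a time; condensed here into a minimal sketch for device 1 only. Every command, path, and NQN below is taken verbatim from this log; the $rpc shorthand is introduced purely for readability.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # interrupt-mode run: -M -I are the extra transport_args passed in by setup_nvmf_vfio_user
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    # 64 MB malloc bdev with 512-byte blocks, backing the subsystem's namespace
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same sequence repeats for device 2 (Malloc2, nqn.2019-07.io.spdk:cnode2, /var/run/vfio-user/domain/vfio-user2/2), as the trace below shows.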
00:15:09.144 [2024-07-24 19:51:00.511000] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:09.144 [2024-07-24 19:51:00.511246] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:09.144 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.144 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:09.144 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:10.080 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:10.340 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:10.340 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:10.340 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:10.340 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:10.340 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:10.340 Malloc1 00:15:10.601 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:10.601 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:10.860 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:11.119 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.119 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:11.119 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:11.119 Malloc2 00:15:11.119 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:11.377 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:11.636 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:11.636 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:11.636 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2033821 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2033821 ']' 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2033821 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2033821 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2033821' 00:15:11.901 killing process with pid 2033821 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2033821 00:15:11.901 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2033821 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:12.168 00:15:12.168 real 0m50.767s 00:15:12.168 user 3m20.837s 00:15:12.168 sys 0m3.459s 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:12.168 ************************************ 00:15:12.168 END TEST nvmf_vfio_user 00:15:12.168 ************************************ 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.168 ************************************ 00:15:12.168 START TEST nvmf_vfio_user_nvme_compliance 00:15:12.168 ************************************ 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:12.168 * Looking for test storage... 
00:15:12.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2034684 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2034684' 00:15:12.168 Process pid: 2034684 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2034684 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2034684 ']' 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.168 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.168 [2024-07-24 19:51:03.702185] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:15:12.168 [2024-07-24 19:51:03.702236] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.168 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.168 [2024-07-24 19:51:03.756347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.475 [2024-07-24 19:51:03.837863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.475 [2024-07-24 19:51:03.837898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.476 [2024-07-24 19:51:03.837905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.476 [2024-07-24 19:51:03.837912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.476 [2024-07-24 19:51:03.837917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.476 [2024-07-24 19:51:03.838027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.476 [2024-07-24 19:51:03.838055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.476 [2024-07-24 19:51:03.838058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.061 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.061 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:13.061 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:13.999 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 malloc0 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.000 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:14.258 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.258 00:15:14.258 00:15:14.258 CUnit - A unit testing framework for C - Version 2.1-3 00:15:14.258 http://cunit.sourceforge.net/ 00:15:14.258 00:15:14.258 00:15:14.258 Suite: nvme_compliance 00:15:14.258 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:51:05.735465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.258 [2024-07-24 19:51:05.736797] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:14.258 [2024-07-24 19:51:05.736811] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:14.258 [2024-07-24 19:51:05.736816] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:14.258 [2024-07-24 19:51:05.738486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.258 passed 00:15:14.258 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:51:05.817074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.258 [2024-07-24 19:51:05.823127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.258 passed 00:15:14.516 Test: admin_identify_ns ...[2024-07-24 19:51:05.896520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.516 [2024-07-24 19:51:05.956057] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:14.516 [2024-07-24 19:51:05.964052] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:14.516 [2024-07-24 
19:51:05.988191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.516 passed 00:15:14.516 Test: admin_get_features_mandatory_features ...[2024-07-24 19:51:06.063379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.516 [2024-07-24 19:51:06.066396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.516 passed 00:15:14.775 Test: admin_get_features_optional_features ...[2024-07-24 19:51:06.146991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.775 [2024-07-24 19:51:06.150013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.775 passed 00:15:14.775 Test: admin_set_features_number_of_queues ...[2024-07-24 19:51:06.226932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.775 [2024-07-24 19:51:06.332134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.775 passed 00:15:15.034 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:51:06.407313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.034 [2024-07-24 19:51:06.410333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.034 passed 00:15:15.034 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:51:06.488315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.034 [2024-07-24 19:51:06.557055] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:15.034 [2024-07-24 19:51:06.570108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.034 passed 00:15:15.293 Test: fabric_property_get ...[2024-07-24 19:51:06.649155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.293 [2024-07-24 19:51:06.650404] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:15.293 [2024-07-24 19:51:06.652189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.293 passed 00:15:15.293 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:51:06.733708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.293 [2024-07-24 19:51:06.734945] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:15.293 [2024-07-24 19:51:06.736727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.293 passed 00:15:15.293 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:51:06.812645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.550 [2024-07-24 19:51:06.896055] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:15.550 [2024-07-24 19:51:06.912060] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:15.550 [2024-07-24 19:51:06.917144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.550 passed 00:15:15.550 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:51:06.997147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.550 [2024-07-24 19:51:06.998377] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:15.550 [2024-07-24 19:51:07.000173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.550 passed 00:15:15.550 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:51:07.076026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.809 [2024-07-24 19:51:07.153050] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:15.809 [2024-07-24 19:51:07.177067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:15.809 [2024-07-24 19:51:07.182162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.809 passed 00:15:15.809 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:51:07.257287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.809 [2024-07-24 19:51:07.258514] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:15.809 [2024-07-24 19:51:07.258540] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:15.809 [2024-07-24 19:51:07.260311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.809 passed 00:15:15.809 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:51:07.338205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.068 [2024-07-24 19:51:07.430061] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:16.068 [2024-07-24 19:51:07.438051] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:16.068 [2024-07-24 19:51:07.446051] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:16.068 [2024-07-24 19:51:07.454049] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:16.068 [2024-07-24 19:51:07.483153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.068 passed 00:15:16.068 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:51:07.560374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.068 [2024-07-24 19:51:07.579056] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:16.068 [2024-07-24 19:51:07.594544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.068 passed 00:15:16.327 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:51:07.672095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.261 [2024-07-24 19:51:08.786054] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:17.826 [2024-07-24 19:51:09.163696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.826 passed 00:15:17.826 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:51:09.240782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.826 [2024-07-24 19:51:09.373063] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:17.826 [2024-07-24 19:51:09.410129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.084 passed 00:15:18.084 00:15:18.084 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.084 
suites 1 1 n/a 0 0 00:15:18.084 tests 18 18 18 0 0 00:15:18.084 asserts 360 360 360 0 n/a 00:15:18.084 00:15:18.084 Elapsed time = 1.512 seconds 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2034684 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2034684 ']' 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2034684 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034684 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2034684' 00:15:18.084 killing process with pid 2034684 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2034684 00:15:18.084 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2034684 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:18.344 00:15:18.344 real 0m6.132s 00:15:18.344 user 0m17.567s 00:15:18.344 sys 0m0.432s 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.344 ************************************ 00:15:18.344 END TEST nvmf_vfio_user_nvme_compliance 00:15:18.344 ************************************ 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:18.344 ************************************ 00:15:18.344 START TEST nvmf_vfio_user_fuzz 00:15:18.344 ************************************ 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:18.344 * Looking for test storage... 
00:15:18.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.344 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated copies of the same three /opt toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=[same value with /opt/go re-prepended; elided] 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc re-prepended; elided] 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo [the exported PATH value; elided] 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2036054 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2036054' 00:15:18.345 Process pid: 2036054 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2036054 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2036054 ']' 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
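The paths/export.sh trace above shows PATH accumulating a fresh copy of the /opt/golangci, /opt/protoc and /opt/go prefixes every time the file is re-sourced. If that accumulation ever needed trimming, a hedged order-preserving dedup (not something export.sh currently does) would be:

    # drop duplicate PATH entries, keeping the first occurrence of each (sketch)
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH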
00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.345 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:19.284 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.284 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:19.284 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:20.221 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 malloc0 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
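Condensing the rpc_cmd sequence above: the fuzz target is a VFIOUSER transport, one 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on the /var/run/vfio-user socket directory. The same calls as plain rpc.py invocations (socket path, NQN and sizes taken from the trace; the repo-relative rpc.py path is assumed):

    # vfio-user fuzz target bring-up, mirroring the rpc_cmd calls above (sketch)
    rpc=./scripts/rpc.py                       # assumes cwd is the spdk checkout
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows then drives this controller for 30 seconds (-t 30) with a fixed random seed (-S 123456) over that VFIOUSER transport ID.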
00:15:20.222 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:52.314 Fuzzing completed. Shutting down the fuzz application 00:15:52.314 00:15:52.314 Dumping successful admin opcodes: 00:15:52.314 8, 9, 10, 24, 00:15:52.314 Dumping successful io opcodes: 00:15:52.314 0, 00:15:52.314 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1145791, total successful commands: 4513, random_seed: 3567913984 00:15:52.314 NS: 0x200003a1ef00 admin qp, Total commands completed: 284345, total successful commands: 2295, random_seed: 1969750464 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2036054 ']' 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2036054' 00:15:52.314 killing process with pid 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2036054 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:52.314 00:15:52.314 real 0m32.885s 00:15:52.314 user 0m35.445s 00:15:52.314 sys 0m26.346s 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.314 
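The opcode lists in the fuzz summary above are decimal NVMe opcodes; decoded against the base-spec numbering (a sketch, assuming standard NVMe 1.4 opcode assignments):

    # decode the successful-opcode summary above (sketch)
    declare -A admin_ops=( [8]=Abort [9]="Set Features" [10]="Get Features" [24]="Keep Alive" )
    declare -A io_ops=( [0]=Flush )
    for op in 8 9 10 24; do printf 'admin 0x%02x: %s\n' "$op" "${admin_ops[$op]}"; done
    printf 'io    0x%02x: %s\n' 0 "${io_ops[0]}"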
************************************ 00:15:52.314 END TEST nvmf_vfio_user_fuzz 00:15:52.314 ************************************ 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.314 ************************************ 00:15:52.314 START TEST nvmf_auth_target 00:15:52.314 ************************************ 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:52.314 * Looking for test storage... 00:15:52.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.314 19:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2-@6 -- # [the same duplicate-accumulating PATH prepends, export PATH, and echoed PATH value as in the previous test; full values elided] 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:52.314 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:52.315 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.567 19:51:47 
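The arrays above encode the NIC allow-list: Intel (0x8086) E810 ids 0x1592/0x159b and X722 0x37d2, plus a set of Mellanox (0x15b3) ids. The same vendor/device check can be reproduced directly against sysfs; the PCI address below is the one this run discovers next (sketch):

    # classify a PCI function the way gather_supported_nvmf_pci_devs does (sketch)
    pci=0000:86:00.0
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 (Intel) on this node
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b -> E810 port
    [[ $vendor == 0x8086 && $device == 0x159b ]] && echo "$pci: e810"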
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:56.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:56.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.567 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:56.568 Found net devices under 0000:86:00.0: cvl_0_0 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.568 19:51:47 
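Each matched function is then mapped to its kernel net devices through the sysfs glob shown above; the interface name is simply whatever hangs under the device's net/ directory. Equivalent one-liner (sketch):

    # list the kernel net devices backing a PCI function, as the glob above does
    ls "/sys/bus/pci/devices/0000:86:00.0/net/"        # prints cvl_0_0 on this node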
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:56.568 Found net devices under 0000:86:00.1: cvl_0_1 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.568 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.568 19:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:15:56.568 00:15:56.568 --- 10.0.0.2 ping statistics --- 00:15:56.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.568 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:15:56.568 00:15:56.568 --- 10.0.0.1 ping statistics --- 00:15:56.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.568 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2044378 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2044378 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2044378 ']' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.568 19:51:48 
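The nvmf_tcp_init sequence above turns the two E810 ports into a target/initiator pair on one machine: cvl_0_0 moves into a namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables accept for the NVMe/TCP port and a ping in each direction to verify. The same commands, collected (interface and namespace names as in the trace; run as root):

    # target/initiator split via a network namespace, as executed above (sketch)
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1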
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.568 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2044610 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b308b7317cf0581caf0a9b1bde25b844a0848ff77150d1de 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mi0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b308b7317cf0581caf0a9b1bde25b844a0848ff77150d1de 0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b308b7317cf0581caf0a9b1bde25b844a0848ff77150d1de 0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b308b7317cf0581caf0a9b1bde25b844a0848ff77150d1de 00:15:57.507 19:51:49 
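Two SPDK applications are now in play: the nvmf target runs inside the namespace (pid 2044378, default /var/tmp/spdk.sock) and a second spdk_tgt acts as the DH-HMAC-CHAP host side on its own RPC socket. The launch pattern, with the Jenkins build paths abbreviated to repo-relative ones (sketch; both are then polled via waitforlisten on their sockets):

    # target inside the namespace, host app outside, as started above (sketch)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    hostpid=$!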
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mi0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mi0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Mi0 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:57.507 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0fba2f71fe4b79f12d59ff5590e8102db64424df0288bac6ff2062b708e985d1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qY3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0fba2f71fe4b79f12d59ff5590e8102db64424df0288bac6ff2062b708e985d1 3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0fba2f71fe4b79f12d59ff5590e8102db64424df0288bac6ff2062b708e985d1 3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0fba2f71fe4b79f12d59ff5590e8102db64424df0288bac6ff2062b708e985d1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qY3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qY3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.qY3 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.766 19:51:49 
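gen_dhchap_key, traced above for key 0 and repeated below for the rest of the key matrix, builds every secret the same way: pull len/2 random bytes as a hex string, write it into a mode-0600 temp file named after the digest, and wrap it into the DHHC-1 interchange format via the inline python helper. A sketch of the visible shell half (the DHHC-1 wrapping itself is left to that helper and not reimplemented here):

    # random hex key material, as gen_dhchap_key does above (sketch)
    digest=null len=48                             # 48 hex chars -> 24 random bytes
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    echo "$key" > "$file"                          # real helper stores the DHHC-1 string
    chmod 0600 "$file"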
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7448ce5a7284ca7d296844d35f09550a 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6ya 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7448ce5a7284ca7d296844d35f09550a 1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7448ce5a7284ca7d296844d35f09550a 1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7448ce5a7284ca7d296844d35f09550a 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6ya 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6ya 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.6ya 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:57.766 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e1f23968e4fc61294cba17f73c9fd751ab257b2386ef4832 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JWw 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e1f23968e4fc61294cba17f73c9fd751ab257b2386ef4832 2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
e1f23968e4fc61294cba17f73c9fd751ab257b2386ef4832 2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e1f23968e4fc61294cba17f73c9fd751ab257b2386ef4832 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JWw 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JWw 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JWw 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dcbafaf3bca2a7a55c9e933a6ea56385e725e33c02d09831 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.88l 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dcbafaf3bca2a7a55c9e933a6ea56385e725e33c02d09831 2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dcbafaf3bca2a7a55c9e933a6ea56385e725e33c02d09831 2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dcbafaf3bca2a7a55c9e933a6ea56385e725e33c02d09831 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.88l 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.88l 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.88l 00:15:57.767 19:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb9c726ebe0ac34cb51cfe626b8e8b35 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TvB 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb9c726ebe0ac34cb51cfe626b8e8b35 1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb9c726ebe0ac34cb51cfe626b8e8b35 1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb9c726ebe0ac34cb51cfe626b8e8b35 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:57.767 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TvB 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TvB 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.TvB 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19536740d571a6f7f80cf0ee717d5532bccf871874a9ebcc9d8e9f38fbdc4179 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:58.027 
19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KZl 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19536740d571a6f7f80cf0ee717d5532bccf871874a9ebcc9d8e9f38fbdc4179 3 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19536740d571a6f7f80cf0ee717d5532bccf871874a9ebcc9d8e9f38fbdc4179 3 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19536740d571a6f7f80cf0ee717d5532bccf871874a9ebcc9d8e9f38fbdc4179 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KZl 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KZl 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.KZl 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2044378 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2044378 ']' 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.027 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2044610 /var/tmp/host.sock 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2044610 ']' 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
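Note on the key-generation steps traced above: the gen_dhchap_key calls build the DH-HMAC-CHAP secrets (keys 0-3 plus their controller keys) that this test later loads into the target and host keyrings. The body of the "python -" step is hidden by xtrace; the following is a minimal sketch of the helper's flow, assuming the standard DHHC-1 secret encoding (base64 of the ASCII hex string followed by its little-endian CRC32). That assumption is consistent with this log's own payloads: the DHHC-1:01:Y2I5... secret used further down base64-decodes back to the cb9c...8b35 key generated above plus four trailing CRC bytes.

gen_dhchap_key() {
    # digest-to-hash-id map as printed in the trace: null=0 sha256=1 sha384=2 sha512=3
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # xxd -p prints 2 hex chars per byte, so a $len-character secret needs len/2 random bytes
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<hash id>:<base64(secret || crc32(secret), little-endian)>:
    # (encoding assumed from the decoded secrets in this log, not read from the elided python body)
    python3 - "$key" "${digests[$digest]}" > "$file" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
EOF
    chmod 0600 "$file"
    echo "$file"
}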
00:15:58.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mi0 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mi0 00:15:58.287 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Mi0 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.qY3 ]] 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qY3 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qY3 00:15:58.546 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qY3 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6ya 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 19:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6ya 00:15:58.805 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6ya 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JWw ]] 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JWw 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JWw 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JWw 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.88l 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.88l 00:15:59.065 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.88l 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.TvB ]] 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TvB 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TvB 00:15:59.325 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TvB 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KZl 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KZl 00:15:59.585 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KZl 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.585 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.844 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.104 00:16:00.104 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.104 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.104 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.364 { 00:16:00.364 "cntlid": 1, 00:16:00.364 "qid": 0, 00:16:00.364 "state": "enabled", 00:16:00.364 "thread": "nvmf_tgt_poll_group_000", 00:16:00.364 "listen_address": { 00:16:00.364 "trtype": "TCP", 00:16:00.364 "adrfam": "IPv4", 00:16:00.364 "traddr": "10.0.0.2", 00:16:00.364 "trsvcid": "4420" 00:16:00.364 }, 00:16:00.364 "peer_address": { 00:16:00.364 "trtype": "TCP", 00:16:00.364 "adrfam": "IPv4", 00:16:00.364 "traddr": "10.0.0.1", 00:16:00.364 "trsvcid": "54442" 00:16:00.364 }, 00:16:00.364 "auth": { 00:16:00.364 "state": "completed", 00:16:00.364 "digest": "sha256", 00:16:00.364 "dhgroup": "null" 00:16:00.364 } 00:16:00.364 } 00:16:00.364 ]' 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.364 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.624 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:01.193 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.193 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.194 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.454 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:01.454 00:16:01.454 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.454 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.714 { 00:16:01.714 "cntlid": 3, 00:16:01.714 "qid": 0, 00:16:01.714 "state": "enabled", 00:16:01.714 "thread": "nvmf_tgt_poll_group_000", 00:16:01.714 "listen_address": { 00:16:01.714 "trtype": "TCP", 00:16:01.714 "adrfam": "IPv4", 00:16:01.714 "traddr": "10.0.0.2", 00:16:01.714 "trsvcid": "4420" 00:16:01.714 }, 00:16:01.714 "peer_address": { 00:16:01.714 "trtype": "TCP", 00:16:01.714 "adrfam": "IPv4", 00:16:01.714 "traddr": "10.0.0.1", 00:16:01.714 "trsvcid": "54460" 00:16:01.714 }, 00:16:01.714 "auth": { 00:16:01.714 "state": "completed", 00:16:01.714 "digest": "sha256", 00:16:01.714 "dhgroup": "null" 00:16:01.714 } 00:16:01.714 } 00:16:01.714 ]' 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.714 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.973 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.543 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.543 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.803 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.063 00:16:03.063 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.063 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.063 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.323 { 00:16:03.323 "cntlid": 5, 00:16:03.323 "qid": 0, 00:16:03.323 "state": "enabled", 00:16:03.323 "thread": "nvmf_tgt_poll_group_000", 00:16:03.323 "listen_address": { 00:16:03.323 "trtype": "TCP", 00:16:03.323 "adrfam": "IPv4", 00:16:03.323 "traddr": "10.0.0.2", 00:16:03.323 "trsvcid": "4420" 00:16:03.323 }, 00:16:03.323 "peer_address": { 00:16:03.323 "trtype": "TCP", 00:16:03.323 "adrfam": "IPv4", 00:16:03.323 "traddr": "10.0.0.1", 00:16:03.323 "trsvcid": "54482" 00:16:03.323 }, 00:16:03.323 "auth": { 00:16:03.323 "state": "completed", 00:16:03.323 "digest": "sha256", 00:16:03.323 "dhgroup": "null" 00:16:03.323 } 00:16:03.323 } 00:16:03.323 ]' 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.323 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.324 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:03.324 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.324 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.324 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.324 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.584 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.153 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.413 00:16:04.413 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.413 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.413 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.673 { 00:16:04.673 "cntlid": 7, 00:16:04.673 "qid": 0, 00:16:04.673 "state": "enabled", 00:16:04.673 "thread": "nvmf_tgt_poll_group_000", 00:16:04.673 "listen_address": { 00:16:04.673 "trtype": "TCP", 00:16:04.673 "adrfam": "IPv4", 00:16:04.673 "traddr": "10.0.0.2", 00:16:04.673 "trsvcid": "4420" 00:16:04.673 }, 00:16:04.673 "peer_address": { 00:16:04.673 "trtype": "TCP", 00:16:04.673 "adrfam": "IPv4", 00:16:04.673 "traddr": "10.0.0.1", 00:16:04.673 "trsvcid": "54514" 00:16:04.673 }, 00:16:04.673 "auth": { 00:16:04.673 "state": "completed", 00:16:04.673 "digest": "sha256", 00:16:04.673 "dhgroup": "null" 00:16:04.673 } 00:16:04.673 } 00:16:04.673 ]' 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.673 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.933 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.502 19:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.502 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.763 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.023 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.023 19:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.023 { 00:16:06.023 "cntlid": 9, 00:16:06.023 "qid": 0, 00:16:06.023 "state": "enabled", 00:16:06.023 "thread": "nvmf_tgt_poll_group_000", 00:16:06.023 "listen_address": { 00:16:06.023 "trtype": "TCP", 00:16:06.023 "adrfam": "IPv4", 00:16:06.023 "traddr": "10.0.0.2", 00:16:06.023 "trsvcid": "4420" 00:16:06.023 }, 00:16:06.023 "peer_address": { 00:16:06.023 "trtype": "TCP", 00:16:06.023 "adrfam": "IPv4", 00:16:06.023 "traddr": "10.0.0.1", 00:16:06.023 "trsvcid": "58554" 00:16:06.023 }, 00:16:06.023 "auth": { 00:16:06.023 "state": "completed", 00:16:06.023 "digest": "sha256", 00:16:06.023 "dhgroup": "ffdhe2048" 00:16:06.023 } 00:16:06.023 } 00:16:06.023 ]' 00:16:06.023 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.283 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.544 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.114 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.374 00:16:07.374 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.374 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.374 19:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.634 { 00:16:07.634 "cntlid": 11, 00:16:07.634 "qid": 0, 00:16:07.634 "state": "enabled", 00:16:07.634 "thread": "nvmf_tgt_poll_group_000", 00:16:07.634 "listen_address": { 
00:16:07.634 "trtype": "TCP", 00:16:07.634 "adrfam": "IPv4", 00:16:07.634 "traddr": "10.0.0.2", 00:16:07.634 "trsvcid": "4420" 00:16:07.634 }, 00:16:07.634 "peer_address": { 00:16:07.634 "trtype": "TCP", 00:16:07.634 "adrfam": "IPv4", 00:16:07.634 "traddr": "10.0.0.1", 00:16:07.634 "trsvcid": "58570" 00:16:07.634 }, 00:16:07.634 "auth": { 00:16:07.634 "state": "completed", 00:16:07.634 "digest": "sha256", 00:16:07.634 "dhgroup": "ffdhe2048" 00:16:07.634 } 00:16:07.634 } 00:16:07.634 ]' 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.634 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.894 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.464 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.725 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.726 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.986 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.986 { 00:16:08.986 "cntlid": 13, 00:16:08.986 "qid": 0, 00:16:08.986 "state": "enabled", 00:16:08.986 "thread": "nvmf_tgt_poll_group_000", 00:16:08.986 "listen_address": { 00:16:08.986 "trtype": "TCP", 00:16:08.986 "adrfam": "IPv4", 00:16:08.986 "traddr": "10.0.0.2", 00:16:08.986 "trsvcid": "4420" 00:16:08.986 }, 00:16:08.986 "peer_address": { 00:16:08.986 "trtype": "TCP", 00:16:08.986 "adrfam": "IPv4", 00:16:08.986 "traddr": "10.0.0.1", 00:16:08.986 "trsvcid": "58588" 00:16:08.986 }, 00:16:08.986 "auth": { 00:16:08.986 
"state": "completed", 00:16:08.986 "digest": "sha256", 00:16:08.986 "dhgroup": "ffdhe2048" 00:16:08.986 } 00:16:08.986 } 00:16:08.986 ]' 00:16:08.986 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.246 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.506 19:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:10.078 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:10.373
00:16:10.373 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:10.373 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:10.373 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:10.633 {
00:16:10.633 "cntlid": 15,
00:16:10.633 "qid": 0,
00:16:10.633 "state": "enabled",
00:16:10.633 "thread": "nvmf_tgt_poll_group_000",
00:16:10.633 "listen_address": {
00:16:10.633 "trtype": "TCP",
00:16:10.633 "adrfam": "IPv4",
00:16:10.633 "traddr": "10.0.0.2",
00:16:10.633 "trsvcid": "4420"
00:16:10.633 },
00:16:10.633 "peer_address": {
00:16:10.633 "trtype": "TCP",
00:16:10.633 "adrfam": "IPv4",
00:16:10.633 "traddr": "10.0.0.1",
00:16:10.633 "trsvcid": "58624"
00:16:10.633 },
00:16:10.633 "auth": {
00:16:10.633 "state": "completed",
00:16:10.633 "digest": "sha256",
00:16:10.633 "dhgroup": "ffdhe2048"
00:16:10.633 }
00:16:10.633 }
00:16:10.633 ]'
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.633 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.893 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=:
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:11.461 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
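Note that the key3 iteration just traced passed no --dhchap-ctrlr-key: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at target/auth.sh@37 drops the flag entirely and the session authenticates the host only, with no controller (bidirectional) challenge. The bash idiom in isolation, with placeholder array contents rather than the generated secrets:

# ${var:+word} expands to word only when var is set and non-empty.
ckeys=(ckey0 ckey1 ckey2 "")    # placeholders; key3 has no controller key
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
# ${ckey[@]} is empty here, so nvmf_subsystem_add_host receives only --dhchap-key key3.
echo "extra args: ${ckey[*]:-(none, unidirectional auth)}"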
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.721 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:11.721
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:11.980 {
00:16:11.980 "cntlid": 17,
00:16:11.980 "qid": 0,
00:16:11.980 "state": "enabled",
00:16:11.980 "thread": "nvmf_tgt_poll_group_000",
00:16:11.980 "listen_address": {
00:16:11.980 "trtype": "TCP",
00:16:11.980 "adrfam": "IPv4",
00:16:11.980 "traddr": "10.0.0.2",
00:16:11.980 "trsvcid": "4420"
00:16:11.980 },
00:16:11.980 "peer_address": {
00:16:11.980 "trtype": "TCP",
00:16:11.980 "adrfam": "IPv4",
00:16:11.980 "traddr": "10.0.0.1",
00:16:11.980 "trsvcid": "58650"
00:16:11.980 },
00:16:11.980 "auth": {
00:16:11.980 "state": "completed",
00:16:11.980 "digest": "sha256",
00:16:11.980 "dhgroup": "ffdhe3072"
00:16:11.980 }
00:16:11.980 }
00:16:11.980 ]'
00:16:11.980 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.239 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=:
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:12.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:12.807 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
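The --dhchap-secret strings passed to nvme connect follow the NVMe TP 8006 key representation, DHHC-1:<t>:<base64>:, where, as I read the spec, <t> selects the secret transform (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick sanity check on one of the secrets traced above:

# Decode one traced secret and check its length: 72 base64 chars -> 52 bytes,
# i.e. a 48-byte secret plus the 4-byte CRC-32 the DHHC-1 format appends.
secret='DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==:'
b64=${secret#DHHC-1:*:}; b64=${b64%:}
printf '%s' "$b64" | base64 -d | wc -c    # prints 52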
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.067 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.327
00:16:13.327 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:13.327 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:13.327 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:13.587 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:13.587 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:13.587 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.587 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:13.587 {
00:16:13.587 "cntlid": 19,
00:16:13.587 "qid": 0,
00:16:13.587 "state": "enabled",
00:16:13.587 "thread": "nvmf_tgt_poll_group_000",
00:16:13.587 "listen_address": {
00:16:13.587 "trtype": "TCP",
00:16:13.587 "adrfam": "IPv4",
00:16:13.587 "traddr": "10.0.0.2",
00:16:13.587 "trsvcid": "4420"
00:16:13.587 },
00:16:13.587 "peer_address": {
00:16:13.587 "trtype": "TCP",
00:16:13.587 "adrfam": "IPv4",
00:16:13.587 "traddr": "10.0.0.1",
00:16:13.587 "trsvcid": "58688"
00:16:13.587 },
00:16:13.587 "auth": {
00:16:13.587 "state": "completed",
00:16:13.587 "digest": "sha256",
00:16:13.587 "dhgroup": "ffdhe3072"
00:16:13.587 }
00:16:13.587 }
00:16:13.587 ]'
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:13.587 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:13.846 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==:
00:16:14.416 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.416 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:14.417 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
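After every attach, the same three assertions run against the target; the backslash-escaped right-hand sides ([[ sha256 == \s\h\a\2\5\6 ]]) are only how bash xtrace prints a literal pattern, not part of the script source. Reconstructed plainly from the traced commands at target/auth.sh@44-48:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# All three must hold for the iteration to pass; dhgroup varies per loop iteration.
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]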
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:14.677 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:14.936
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:14.936 {
00:16:14.936 "cntlid": 21,
00:16:14.936 "qid": 0,
00:16:14.936 "state": "enabled",
00:16:14.936 "thread": "nvmf_tgt_poll_group_000",
00:16:14.936 "listen_address": {
00:16:14.936 "trtype": "TCP",
00:16:14.936 "adrfam": "IPv4",
00:16:14.936 "traddr": "10.0.0.2",
00:16:14.936 "trsvcid": "4420"
00:16:14.936 },
00:16:14.936 "peer_address": {
00:16:14.936 "trtype": "TCP",
00:16:14.936 "adrfam": "IPv4",
00:16:14.936 "traddr": "10.0.0.1",
00:16:14.936 "trsvcid": "58730"
00:16:14.936 },
00:16:14.936 "auth": {
00:16:14.936 "state": "completed",
00:16:14.936 "digest": "sha256",
00:16:14.936 "dhgroup": "ffdhe3072"
00:16:14.936 }
00:16:14.936 }
00:16:14.936 ]'
00:16:14.936 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:15.195 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:15.195 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:15.196 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:15.196 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:15.196 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:15.196 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.196 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:15.455 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw:
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:16.024 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:16.284
00:16:16.284 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:16.284 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:16.284 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.544 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:16.544 {
00:16:16.545 "cntlid": 23,
00:16:16.545 "qid": 0,
00:16:16.545 "state": "enabled",
00:16:16.545 "thread": "nvmf_tgt_poll_group_000",
00:16:16.545 "listen_address": {
00:16:16.545 "trtype": "TCP",
00:16:16.545 "adrfam": "IPv4",
00:16:16.545 "traddr": "10.0.0.2",
00:16:16.545 "trsvcid": "4420"
00:16:16.545 },
00:16:16.545 "peer_address": {
00:16:16.545 "trtype": "TCP",
00:16:16.545 "adrfam": "IPv4",
00:16:16.545 "traddr": "10.0.0.1",
00:16:16.545 "trsvcid": "36394"
00:16:16.545 },
00:16:16.545 "auth": {
00:16:16.545 "state": "completed",
00:16:16.545 "digest": "sha256",
00:16:16.545 "dhgroup": "ffdhe3072"
00:16:16.545 }
00:16:16.545 }
00:16:16.545 ]'
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.545 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.805 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=:
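Every hostrpc trace above is immediately followed by its expansion at target/auth.sh@31: the host-side SPDK application runs its own JSON-RPC server on /var/tmp/host.sock, separate from the target socket that rpc_cmd talks to. The wrapper's actual definition is not shown in this log, but from the expanded traces it behaves essentially as:

hostrpc() {
    # Forward the RPC to the host application's socket instead of the
    # target's default socket (sketch inferred from the @31 trace lines).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}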
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:17.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:17.374 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.634 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.634 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.634 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.634 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.635 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:17.894
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.894 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:17.895 {
00:16:17.895 "cntlid": 25,
00:16:17.895 "qid": 0,
00:16:17.895 "state": "enabled",
00:16:17.895 "thread": "nvmf_tgt_poll_group_000",
00:16:17.895 "listen_address": {
00:16:17.895 "trtype": "TCP",
00:16:17.895 "adrfam": "IPv4",
00:16:17.895 "traddr": "10.0.0.2",
00:16:17.895 "trsvcid": "4420"
00:16:17.895 },
00:16:17.895 "peer_address": {
00:16:17.895 "trtype": "TCP",
00:16:17.895 "adrfam": "IPv4",
00:16:17.895 "traddr": "10.0.0.1",
00:16:17.895 "trsvcid": "36412"
00:16:17.895 },
00:16:17.895 "auth": {
00:16:17.895 "state": "completed",
00:16:17.895 "digest": "sha256",
00:16:17.895 "dhgroup": "ffdhe4096"
00:16:17.895 }
00:16:17.895 }
00:16:17.895 ]'
00:16:17.895 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:18.154 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:18.414 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=:
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
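The @92/@93 trace lines mark the two loop levels driving these cycles: the run has just advanced the outer loop from ffdhe3072 to ffdhe4096 and restarted the key index at 0. The driver implied by those lines looks roughly like the following (key generation omitted; the array contents are inferred from the values actually traced, not read from the script source):

for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do        # 0 1 2 3
        # Restrict the host to exactly one digest and one DH group, so a
        # successful handshake proves that specific combination negotiated.
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done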
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.985 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:19.245
00:16:19.245 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:19.245 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:19.245 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:19.504 {
00:16:19.504 "cntlid": 27,
00:16:19.504 "qid": 0,
00:16:19.504 "state": "enabled",
00:16:19.504 "thread": "nvmf_tgt_poll_group_000",
00:16:19.504 "listen_address": {
00:16:19.504 "trtype": "TCP",
00:16:19.504 "adrfam": "IPv4",
00:16:19.504 "traddr": "10.0.0.2",
00:16:19.504 "trsvcid": "4420"
00:16:19.504 },
00:16:19.504 "peer_address": {
00:16:19.504 "trtype": "TCP",
00:16:19.504 "adrfam": "IPv4",
00:16:19.504 "traddr": "10.0.0.1",
00:16:19.504 "trsvcid": "36444"
00:16:19.504 },
00:16:19.504 "auth": {
00:16:19.504 "state": "completed",
00:16:19.504 "digest": "sha256",
00:16:19.504 "dhgroup": "ffdhe4096"
00:16:19.504 }
00:16:19.504 }
00:16:19.504 ]'
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:19.504 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.504 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.763 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==:
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.332 19:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.591
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:20.851 {
00:16:20.851 "cntlid": 29,
00:16:20.851 "qid": 0,
00:16:20.851 "state": "enabled",
00:16:20.851 "thread": "nvmf_tgt_poll_group_000",
00:16:20.851 "listen_address": {
00:16:20.851 "trtype": "TCP",
00:16:20.851 "adrfam": "IPv4",
00:16:20.851 "traddr": "10.0.0.2",
00:16:20.851 "trsvcid": "4420"
00:16:20.851 },
00:16:20.851 "peer_address": {
00:16:20.851 "trtype": "TCP",
00:16:20.851 "adrfam": "IPv4",
00:16:20.851 "traddr": "10.0.0.1",
00:16:20.851 "trsvcid": "36458"
00:16:20.851 },
00:16:20.851 "auth": {
00:16:20.851 "state": "completed",
00:16:20.851 "digest": "sha256",
00:16:20.851 "dhgroup": "ffdhe4096"
00:16:20.851 }
00:16:20.851 }
00:16:20.851 ]'
00:16:20.851 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.111 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw:
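Each iteration actually authenticates twice: once through the SPDK host stack (bdev_nvme_attach_controller) and once through the Linux kernel initiator via nvme-cli, as in the connect/disconnect pair traced just above. Stripped of the long secrets (abbreviated here, not elided in the log), the kernel-side leg is:

# Kernel-initiator leg of the cycle; flags exactly as traced, secrets abbreviated.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0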
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:21.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:21.679 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:21.938 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:21.946 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:22.208
00:16:22.208 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:22.208 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:22.208 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:22.467 {
00:16:22.467 "cntlid": 31,
00:16:22.467 "qid": 0,
00:16:22.467 "state": "enabled",
00:16:22.467 "thread": "nvmf_tgt_poll_group_000",
00:16:22.467 "listen_address": {
00:16:22.467 "trtype": "TCP",
00:16:22.467 "adrfam": "IPv4",
00:16:22.467 "traddr": "10.0.0.2",
00:16:22.467 "trsvcid": "4420"
00:16:22.467 },
00:16:22.467 "peer_address": {
00:16:22.467 "trtype": "TCP",
00:16:22.467 "adrfam": "IPv4",
00:16:22.467 "traddr": "10.0.0.1",
00:16:22.467 "trsvcid": "36466"
00:16:22.467 },
00:16:22.467 "auth": {
00:16:22.467 "state": "completed",
00:16:22.467 "digest": "sha256",
00:16:22.467 "dhgroup": "ffdhe4096"
00:16:22.467 }
00:16:22.467 }
00:16:22.467 ]'
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:22.467 19:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:22.467 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:22.467 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:22.467 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.467 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.727 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=:
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:23.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:23.297 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:23.556 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:16:23.556 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:23.556 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:23.556 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:23.557 19:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:23.818
00:16:23.818 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:23.818 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:23.818 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:24.078 {
00:16:24.078 "cntlid": 33,
00:16:24.078 "qid": 0,
00:16:24.078 "state": "enabled",
00:16:24.078 "thread": "nvmf_tgt_poll_group_000",
00:16:24.078 "listen_address": {
00:16:24.078 "trtype": "TCP", 00:16:24.078 "adrfam": "IPv4", 00:16:24.078 "traddr": "10.0.0.2", 00:16:24.078 "trsvcid": "4420" 00:16:24.078 }, 00:16:24.078 "peer_address": { 00:16:24.078 "trtype": "TCP", 00:16:24.078 "adrfam": "IPv4", 00:16:24.078 "traddr": "10.0.0.1", 00:16:24.078 "trsvcid": "36484" 00:16:24.078 }, 00:16:24.078 "auth": { 00:16:24.078 "state": "completed", 00:16:24.078 "digest": "sha256", 00:16:24.078 "dhgroup": "ffdhe6144" 00:16:24.078 } 00:16:24.078 } 00:16:24.078 ]' 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.078 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.338 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:24.907 19:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.907 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.476 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.476 { 00:16:25.476 "cntlid": 35, 00:16:25.476 "qid": 0, 00:16:25.476 "state": "enabled", 00:16:25.476 "thread": "nvmf_tgt_poll_group_000", 00:16:25.476 "listen_address": { 00:16:25.476 "trtype": "TCP", 00:16:25.476 "adrfam": "IPv4", 00:16:25.476 "traddr": "10.0.0.2", 00:16:25.476 "trsvcid": "4420" 00:16:25.476 }, 00:16:25.476 "peer_address": { 00:16:25.476 "trtype": "TCP", 00:16:25.476 "adrfam": "IPv4", 00:16:25.476 "traddr": "10.0.0.1", 00:16:25.476 "trsvcid": "53652" 00:16:25.476 
}, 00:16:25.476 "auth": { 00:16:25.476 "state": "completed", 00:16:25.476 "digest": "sha256", 00:16:25.476 "dhgroup": "ffdhe6144" 00:16:25.476 } 00:16:25.476 } 00:16:25.476 ]' 00:16:25.476 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.476 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.476 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.476 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.476 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.736 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.736 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.736 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.736 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:26.305 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.306 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.565 19:52:18 
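[annotation] The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace makes the controller key optional, so the sweep covers both unidirectional and bidirectional authentication. A sketch of the two nvmf_subsystem_add_host variants this log exercises (subnqn/hostnqn as in the sketch above):

    # bidirectional: host proves possession of key0, controller must prove ckey0 back
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # unidirectional: key3 has no matching ckey3, so only the host is authenticated
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3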
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.565 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.566 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.566 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.825 00:16:26.825 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.825 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.825 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.085 { 00:16:27.085 "cntlid": 37, 00:16:27.085 "qid": 0, 00:16:27.085 "state": "enabled", 00:16:27.085 "thread": "nvmf_tgt_poll_group_000", 00:16:27.085 "listen_address": { 00:16:27.085 "trtype": "TCP", 00:16:27.085 "adrfam": "IPv4", 00:16:27.085 "traddr": "10.0.0.2", 00:16:27.085 "trsvcid": "4420" 00:16:27.085 }, 00:16:27.085 "peer_address": { 00:16:27.085 "trtype": "TCP", 00:16:27.085 "adrfam": "IPv4", 00:16:27.085 "traddr": "10.0.0.1", 00:16:27.085 "trsvcid": "53672" 00:16:27.085 }, 00:16:27.085 "auth": { 00:16:27.085 "state": "completed", 00:16:27.085 "digest": "sha256", 00:16:27.085 "dhgroup": "ffdhe6144" 00:16:27.085 } 00:16:27.085 } 00:16:27.085 ]' 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.085 19:52:18 
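[annotation] The jq probes after each nvmf_subsystem_get_qpairs call reduce the qpair JSON to the three negotiated auth fields and compare them against the loop variables. A sketch of that check, assuming $qpairs holds the JSON array printed above:

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]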
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.085 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.345 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:27.913 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.913 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.913 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.913 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.914 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.914 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.914 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.914 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.173 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.174 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.434 00:16:28.434 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.434 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.434 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.693 { 00:16:28.693 "cntlid": 39, 00:16:28.693 "qid": 0, 00:16:28.693 "state": "enabled", 00:16:28.693 "thread": "nvmf_tgt_poll_group_000", 00:16:28.693 "listen_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.2", 00:16:28.693 "trsvcid": "4420" 00:16:28.693 }, 00:16:28.693 "peer_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.1", 00:16:28.693 "trsvcid": "53704" 00:16:28.693 }, 00:16:28.693 "auth": { 00:16:28.693 "state": "completed", 00:16:28.693 "digest": "sha256", 00:16:28.693 "dhgroup": "ffdhe6144" 00:16:28.693 } 00:16:28.693 } 00:16:28.693 ]' 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.693 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.694 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.953 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.521 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.781 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.039 00:16:30.039 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.039 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.039 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.298 { 00:16:30.298 "cntlid": 41, 00:16:30.298 "qid": 0, 00:16:30.298 "state": "enabled", 00:16:30.298 "thread": "nvmf_tgt_poll_group_000", 00:16:30.298 "listen_address": { 00:16:30.298 "trtype": "TCP", 00:16:30.298 "adrfam": "IPv4", 00:16:30.298 "traddr": "10.0.0.2", 00:16:30.298 "trsvcid": "4420" 00:16:30.298 }, 00:16:30.298 "peer_address": { 00:16:30.298 "trtype": "TCP", 00:16:30.298 "adrfam": "IPv4", 00:16:30.298 "traddr": "10.0.0.1", 00:16:30.298 "trsvcid": "53722" 00:16:30.298 }, 00:16:30.298 "auth": { 00:16:30.298 "state": "completed", 00:16:30.298 "digest": "sha256", 00:16:30.298 "dhgroup": "ffdhe8192" 00:16:30.298 } 00:16:30.298 } 00:16:30.298 ]' 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.298 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.557 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.557 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.557 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.557 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:30.557 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.557 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.125 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.384 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:31.384 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.385 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.952 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.952 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.952 { 00:16:31.952 "cntlid": 43, 00:16:31.952 "qid": 0, 00:16:31.952 "state": "enabled", 00:16:31.952 "thread": "nvmf_tgt_poll_group_000", 00:16:31.952 "listen_address": { 00:16:31.952 "trtype": "TCP", 00:16:31.952 "adrfam": "IPv4", 00:16:31.952 "traddr": "10.0.0.2", 00:16:31.952 "trsvcid": "4420" 00:16:31.952 }, 00:16:31.952 "peer_address": { 00:16:31.952 "trtype": "TCP", 00:16:31.952 "adrfam": "IPv4", 00:16:31.952 "traddr": "10.0.0.1", 00:16:31.952 "trsvcid": "53752" 00:16:31.952 }, 00:16:31.952 "auth": { 00:16:31.952 "state": "completed", 00:16:31.952 "digest": "sha256", 00:16:31.952 "dhgroup": "ffdhe8192" 00:16:31.952 } 00:16:31.952 } 00:16:31.952 ]' 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.210 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.470 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.037 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.038 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.038 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.038 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.606 00:16:33.606 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.606 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.606 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.865 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.865 { 00:16:33.865 "cntlid": 45, 00:16:33.865 "qid": 0, 00:16:33.865 "state": "enabled", 00:16:33.865 "thread": "nvmf_tgt_poll_group_000", 00:16:33.865 "listen_address": { 00:16:33.865 "trtype": "TCP", 00:16:33.865 "adrfam": "IPv4", 00:16:33.865 "traddr": "10.0.0.2", 00:16:33.865 "trsvcid": "4420" 00:16:33.865 }, 00:16:33.865 "peer_address": { 00:16:33.865 "trtype": "TCP", 00:16:33.866 "adrfam": "IPv4", 00:16:33.866 "traddr": "10.0.0.1", 00:16:33.866 "trsvcid": "53774" 00:16:33.866 }, 00:16:33.866 "auth": { 00:16:33.866 "state": "completed", 00:16:33.866 "digest": "sha256", 00:16:33.866 "dhgroup": "ffdhe8192" 00:16:33.866 } 00:16:33.866 } 00:16:33.866 ]' 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.866 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.125 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret 
DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.695 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.264 00:16:35.264 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.264 19:52:26 
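[annotation] After each SPDK-initiator cycle the trace repeats the handshake with the kernel initiator via nvme-cli, passing the generated secrets on the command line. A sketch of that step; the DHHC-1:<id>:<base64>: secrets are abbreviated here but appear in full in the trace, and --hostid matches the host UUID:

    # secrets abbreviated; a real invocation uses the full DHHC-1 strings above
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:ZGNiYWZh...' --dhchap-ctrl-secret 'DHHC-1:01:Y2I5...'
    # a successful run then disconnects exactly one controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0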
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.264 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.524 { 00:16:35.524 "cntlid": 47, 00:16:35.524 "qid": 0, 00:16:35.524 "state": "enabled", 00:16:35.524 "thread": "nvmf_tgt_poll_group_000", 00:16:35.524 "listen_address": { 00:16:35.524 "trtype": "TCP", 00:16:35.524 "adrfam": "IPv4", 00:16:35.524 "traddr": "10.0.0.2", 00:16:35.524 "trsvcid": "4420" 00:16:35.524 }, 00:16:35.524 "peer_address": { 00:16:35.524 "trtype": "TCP", 00:16:35.524 "adrfam": "IPv4", 00:16:35.524 "traddr": "10.0.0.1", 00:16:35.524 "trsvcid": "44312" 00:16:35.524 }, 00:16:35.524 "auth": { 00:16:35.524 "state": "completed", 00:16:35.524 "digest": "sha256", 00:16:35.524 "dhgroup": "ffdhe8192" 00:16:35.524 } 00:16:35.524 } 00:16:35.524 ]' 00:16:35.524 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.524 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.783 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.352 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.611 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.611 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.871 { 00:16:36.871 "cntlid": 49, 00:16:36.871 "qid": 0, 00:16:36.871 "state": "enabled", 00:16:36.871 "thread": "nvmf_tgt_poll_group_000", 00:16:36.871 "listen_address": { 00:16:36.871 "trtype": "TCP", 00:16:36.871 "adrfam": "IPv4", 00:16:36.871 "traddr": "10.0.0.2", 00:16:36.871 "trsvcid": "4420" 00:16:36.871 }, 00:16:36.871 "peer_address": { 00:16:36.871 "trtype": "TCP", 00:16:36.871 "adrfam": "IPv4", 00:16:36.871 "traddr": "10.0.0.1", 00:16:36.871 "trsvcid": "44342" 00:16:36.871 }, 00:16:36.871 "auth": { 00:16:36.871 "state": "completed", 00:16:36.871 "digest": "sha384", 00:16:36.871 "dhgroup": "null" 00:16:36.871 } 00:16:36.871 } 00:16:36.871 ]' 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.871 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.131 19:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.701 19:52:29 
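[annotation] With the sha256 groups exhausted, the target/auth.sh@91-@96 markers show the outer loops advancing to sha384 with the null DH group. The overall shape of the sweep, sketched from those markers; the digests/dhgroups array contents are not printed in this portion of the log, which exercises sha256/sha384 and null/ffdhe4096/ffdhe6144/ffdhe8192:

    for digest in "${digests[@]}"; do                        # target/auth.sh@91
        for dhgroup in "${dhgroups[@]}"; do                  # target/auth.sh@92
            for keyid in "${!keys[@]}"; do                   # target/auth.sh@93
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"             # target/auth.sh@94
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # target/auth.sh@96
            done
        done
    done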
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:37.701 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:37.960 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.961 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.254 00:16:38.254 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.254 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.254 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.520 { 00:16:38.520 "cntlid": 51, 00:16:38.520 "qid": 0, 00:16:38.520 "state": "enabled", 00:16:38.520 "thread": "nvmf_tgt_poll_group_000", 00:16:38.520 "listen_address": { 00:16:38.520 "trtype": "TCP", 00:16:38.520 "adrfam": "IPv4", 00:16:38.520 "traddr": "10.0.0.2", 00:16:38.520 "trsvcid": "4420" 00:16:38.520 }, 00:16:38.520 "peer_address": { 00:16:38.520 "trtype": "TCP", 00:16:38.520 "adrfam": "IPv4", 00:16:38.520 "traddr": "10.0.0.1", 00:16:38.520 "trsvcid": "44360" 00:16:38.520 }, 00:16:38.520 "auth": { 00:16:38.520 "state": "completed", 00:16:38.520 "digest": "sha384", 00:16:38.520 "dhgroup": "null" 00:16:38.520 } 00:16:38.520 } 00:16:38.520 ]' 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:38.520 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.520 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.520 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.520 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.779 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.349 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.608 00:16:39.609 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.609 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.609 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.868 { 00:16:39.868 "cntlid": 53, 00:16:39.868 "qid": 0, 00:16:39.868 "state": "enabled", 00:16:39.868 "thread": "nvmf_tgt_poll_group_000", 00:16:39.868 "listen_address": { 00:16:39.868 "trtype": "TCP", 00:16:39.868 "adrfam": "IPv4", 00:16:39.868 "traddr": "10.0.0.2", 00:16:39.868 "trsvcid": "4420" 00:16:39.868 }, 00:16:39.868 "peer_address": { 00:16:39.868 "trtype": "TCP", 00:16:39.868 "adrfam": "IPv4", 00:16:39.868 "traddr": "10.0.0.1", 00:16:39.868 "trsvcid": "44392" 00:16:39.868 }, 00:16:39.868 "auth": { 00:16:39.868 "state": "completed", 00:16:39.868 "digest": "sha384", 00:16:39.868 "dhgroup": "null" 00:16:39.868 } 00:16:39.868 } 00:16:39.868 ]' 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.868 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.127 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.127 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.127 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.127 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.696 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.956 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.215 00:16:41.215 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.215 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.215 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.215 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.474 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.475 { 00:16:41.475 "cntlid": 55, 00:16:41.475 "qid": 0, 00:16:41.475 "state": "enabled", 00:16:41.475 "thread": "nvmf_tgt_poll_group_000", 00:16:41.475 "listen_address": { 00:16:41.475 "trtype": "TCP", 00:16:41.475 "adrfam": "IPv4", 00:16:41.475 "traddr": "10.0.0.2", 00:16:41.475 "trsvcid": "4420" 00:16:41.475 }, 00:16:41.475 "peer_address": { 
00:16:41.475 "trtype": "TCP", 00:16:41.475 "adrfam": "IPv4", 00:16:41.475 "traddr": "10.0.0.1", 00:16:41.475 "trsvcid": "44408" 00:16:41.475 }, 00:16:41.475 "auth": { 00:16:41.475 "state": "completed", 00:16:41.475 "digest": "sha384", 00:16:41.475 "dhgroup": "null" 00:16:41.475 } 00:16:41.475 } 00:16:41.475 ]' 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.475 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.734 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.304 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.564 00:16:42.564 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.564 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.564 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.823 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.823 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.823 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.823 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.824 { 00:16:42.824 "cntlid": 57, 00:16:42.824 "qid": 0, 00:16:42.824 "state": "enabled", 00:16:42.824 "thread": "nvmf_tgt_poll_group_000", 00:16:42.824 "listen_address": { 00:16:42.824 "trtype": "TCP", 00:16:42.824 "adrfam": "IPv4", 00:16:42.824 "traddr": "10.0.0.2", 00:16:42.824 "trsvcid": "4420" 00:16:42.824 }, 00:16:42.824 "peer_address": { 00:16:42.824 "trtype": "TCP", 00:16:42.824 "adrfam": "IPv4", 00:16:42.824 "traddr": "10.0.0.1", 00:16:42.824 "trsvcid": "44430" 00:16:42.824 }, 00:16:42.824 "auth": { 00:16:42.824 "state": "completed", 00:16:42.824 "digest": "sha384", 00:16:42.824 "dhgroup": "ffdhe2048" 00:16:42.824 } 00:16:42.824 } 00:16:42.824 ]' 
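The JSON block above is the nvmf_subsystem_get_qpairs output that each pass verifies next with jq; the trace repeats the same connect_authenticate cycle for every key (key0..key3) under each digest/dhgroup pair (sha384 with null, then ffdhe2048, then ffdhe3072 in this stretch). Condensed into plain shell, one cycle looks roughly like the sketch below. This is a sketch, not the literal test: rpc_cmd in the trace is the target-side rpc.py wrapper, both sides are shown here as plain rpc.py calls (with -s /var/tmp/host.sock selecting the host application), HOSTNQN/HOSTID stand in for the uuid-based values from the trace, and the DHHC-1 secrets are placeholders rather than the logged keys.

  # Host side: restrict bdev_nvme to the digest/dhgroup pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Target side: authorize the host NQN with key N and controller key N
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach over TCP; DH-HMAC-CHAP runs during the CONNECT exchange
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: confirm the qpair completed authentication with the expected
  # digest and dhgroup (the jq checks in the trace assert exactly these fields)
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'

  # Tear down the RPC-driven path, re-run the attach with the kernel
  # initiator, then deauthorize the host
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret 'DHHC-1:00:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The DHHC-1:<nn>: prefix on each logged secret appears to encode the transform applied to the key material (00 = unhashed, 01/02/03 = SHA-256/384/512), which is why the cycles step through one secret of each form.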
00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.824 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.083 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.652 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.912 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.171 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.171 { 00:16:44.171 "cntlid": 59, 00:16:44.171 "qid": 0, 00:16:44.171 "state": "enabled", 00:16:44.171 "thread": "nvmf_tgt_poll_group_000", 00:16:44.171 "listen_address": { 00:16:44.171 "trtype": "TCP", 00:16:44.171 "adrfam": "IPv4", 00:16:44.171 "traddr": "10.0.0.2", 00:16:44.171 "trsvcid": "4420" 00:16:44.171 }, 00:16:44.171 "peer_address": { 00:16:44.171 "trtype": "TCP", 00:16:44.171 "adrfam": "IPv4", 00:16:44.171 "traddr": "10.0.0.1", 00:16:44.171 "trsvcid": "44450" 00:16:44.171 }, 00:16:44.171 "auth": { 00:16:44.171 "state": "completed", 00:16:44.171 "digest": "sha384", 00:16:44.171 "dhgroup": "ffdhe2048" 00:16:44.171 } 00:16:44.171 } 00:16:44.171 ]' 00:16:44.171 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.431 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.690 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:45.258 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.259 
19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.259 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.518 00:16:45.518 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.518 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.518 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.777 { 00:16:45.777 "cntlid": 61, 00:16:45.777 "qid": 0, 00:16:45.777 "state": "enabled", 00:16:45.777 "thread": "nvmf_tgt_poll_group_000", 00:16:45.777 "listen_address": { 00:16:45.777 "trtype": "TCP", 00:16:45.777 "adrfam": "IPv4", 00:16:45.777 "traddr": "10.0.0.2", 00:16:45.777 "trsvcid": "4420" 00:16:45.777 }, 00:16:45.777 "peer_address": { 00:16:45.777 "trtype": "TCP", 00:16:45.777 "adrfam": "IPv4", 00:16:45.777 "traddr": "10.0.0.1", 00:16:45.777 "trsvcid": "59636" 00:16:45.777 }, 00:16:45.777 "auth": { 00:16:45.777 "state": "completed", 00:16:45.777 "digest": "sha384", 00:16:45.777 "dhgroup": "ffdhe2048" 00:16:45.777 } 00:16:45.777 } 00:16:45.777 ]' 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.777 19:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.777 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.037 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.606 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.865 
19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.865 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.125 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.125 { 00:16:47.125 "cntlid": 63, 00:16:47.125 "qid": 0, 00:16:47.125 "state": "enabled", 00:16:47.125 "thread": "nvmf_tgt_poll_group_000", 00:16:47.125 "listen_address": { 00:16:47.125 "trtype": "TCP", 00:16:47.125 "adrfam": "IPv4", 00:16:47.125 "traddr": "10.0.0.2", 00:16:47.125 "trsvcid": "4420" 00:16:47.125 }, 00:16:47.125 "peer_address": { 00:16:47.125 "trtype": "TCP", 00:16:47.125 "adrfam": "IPv4", 00:16:47.125 "traddr": "10.0.0.1", 00:16:47.125 "trsvcid": "59654" 00:16:47.125 }, 00:16:47.125 "auth": { 00:16:47.125 "state": "completed", 00:16:47.125 "digest": "sha384", 00:16:47.125 "dhgroup": "ffdhe2048" 00:16:47.125 } 00:16:47.125 } 00:16:47.125 ]' 00:16:47.125 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.385 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:47.644 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.213 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.214 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.214 19:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.473 00:16:48.473 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.473 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.473 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.732 { 00:16:48.732 "cntlid": 65, 00:16:48.732 "qid": 0, 00:16:48.732 "state": "enabled", 00:16:48.732 "thread": "nvmf_tgt_poll_group_000", 00:16:48.732 "listen_address": { 00:16:48.732 "trtype": "TCP", 00:16:48.732 "adrfam": "IPv4", 00:16:48.732 "traddr": "10.0.0.2", 00:16:48.732 "trsvcid": "4420" 00:16:48.732 }, 00:16:48.732 "peer_address": { 00:16:48.732 "trtype": "TCP", 00:16:48.732 "adrfam": "IPv4", 00:16:48.732 "traddr": "10.0.0.1", 00:16:48.732 "trsvcid": "59694" 00:16:48.732 }, 00:16:48.732 "auth": { 00:16:48.732 "state": "completed", 00:16:48.732 "digest": "sha384", 00:16:48.732 "dhgroup": "ffdhe3072" 00:16:48.732 } 00:16:48.732 } 00:16:48.732 ]' 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.732 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.991 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.991 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.991 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.991 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.560 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.819 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:49.819 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.819 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.820 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.078 00:16:50.078 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.078 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.079 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.337 { 00:16:50.337 "cntlid": 67, 00:16:50.337 "qid": 0, 00:16:50.337 "state": "enabled", 00:16:50.337 "thread": "nvmf_tgt_poll_group_000", 00:16:50.337 "listen_address": { 00:16:50.337 "trtype": "TCP", 00:16:50.337 "adrfam": "IPv4", 00:16:50.337 "traddr": "10.0.0.2", 00:16:50.337 "trsvcid": "4420" 00:16:50.337 }, 00:16:50.337 "peer_address": { 00:16:50.337 "trtype": "TCP", 00:16:50.337 "adrfam": "IPv4", 00:16:50.337 "traddr": "10.0.0.1", 00:16:50.337 "trsvcid": "59706" 00:16:50.337 }, 00:16:50.337 "auth": { 00:16:50.337 "state": "completed", 00:16:50.337 "digest": "sha384", 00:16:50.337 "dhgroup": "ffdhe3072" 00:16:50.337 } 00:16:50.337 } 00:16:50.337 ]' 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.337 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.595 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:51.162 19:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.162 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.420 00:16:51.420 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.420 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.420 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.678 { 00:16:51.678 "cntlid": 69, 00:16:51.678 "qid": 0, 00:16:51.678 "state": "enabled", 00:16:51.678 "thread": "nvmf_tgt_poll_group_000", 00:16:51.678 "listen_address": { 00:16:51.678 "trtype": "TCP", 00:16:51.678 "adrfam": "IPv4", 00:16:51.678 "traddr": "10.0.0.2", 00:16:51.678 "trsvcid": "4420" 00:16:51.678 }, 00:16:51.678 "peer_address": { 00:16:51.678 "trtype": "TCP", 00:16:51.678 "adrfam": "IPv4", 00:16:51.678 "traddr": "10.0.0.1", 00:16:51.678 "trsvcid": "59750" 00:16:51.678 }, 00:16:51.678 "auth": { 00:16:51.678 "state": "completed", 00:16:51.678 "digest": "sha384", 00:16:51.678 "dhgroup": "ffdhe3072" 00:16:51.678 } 00:16:51.678 } 00:16:51.678 ]' 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.678 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.937 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.937 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.937 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.937 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.937 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.540 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.799 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.059 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.059 19:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.059 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.318 { 00:16:53.318 "cntlid": 71, 00:16:53.318 "qid": 0, 00:16:53.318 "state": "enabled", 00:16:53.318 "thread": "nvmf_tgt_poll_group_000", 00:16:53.318 "listen_address": { 00:16:53.318 "trtype": "TCP", 00:16:53.318 "adrfam": "IPv4", 00:16:53.318 "traddr": "10.0.0.2", 00:16:53.318 "trsvcid": "4420" 00:16:53.318 }, 00:16:53.318 "peer_address": { 00:16:53.318 "trtype": "TCP", 00:16:53.318 "adrfam": "IPv4", 00:16:53.318 "traddr": "10.0.0.1", 00:16:53.318 "trsvcid": "59766" 00:16:53.318 }, 00:16:53.318 "auth": { 00:16:53.318 "state": "completed", 00:16:53.318 "digest": "sha384", 00:16:53.318 "dhgroup": "ffdhe3072" 00:16:53.318 } 00:16:53.318 } 00:16:53.318 ]' 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.318 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.577 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:16:54.144 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.145 19:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.145 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.404 00:16:54.404 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.404 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.404 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.664 19:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.664 { 00:16:54.664 "cntlid": 73, 00:16:54.664 "qid": 0, 00:16:54.664 "state": "enabled", 00:16:54.664 "thread": "nvmf_tgt_poll_group_000", 00:16:54.664 "listen_address": { 00:16:54.664 "trtype": "TCP", 00:16:54.664 "adrfam": "IPv4", 00:16:54.664 "traddr": "10.0.0.2", 00:16:54.664 "trsvcid": "4420" 00:16:54.664 }, 00:16:54.664 "peer_address": { 00:16:54.664 "trtype": "TCP", 00:16:54.664 "adrfam": "IPv4", 00:16:54.664 "traddr": "10.0.0.1", 00:16:54.664 "trsvcid": "59792" 00:16:54.664 }, 00:16:54.664 "auth": { 00:16:54.664 "state": "completed", 00:16:54.664 "digest": "sha384", 00:16:54.664 "dhgroup": "ffdhe4096" 00:16:54.664 } 00:16:54.664 } 00:16:54.664 ]' 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.664 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.923 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.923 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.923 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.923 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:16:55.491 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.491 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.491 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.491 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.491 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.491 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.491 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.491 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.750 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.008 00:16:56.008 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.008 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.008 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.267 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:16:56.267 { 00:16:56.267 "cntlid": 75, 00:16:56.267 "qid": 0, 00:16:56.267 "state": "enabled", 00:16:56.267 "thread": "nvmf_tgt_poll_group_000", 00:16:56.267 "listen_address": { 00:16:56.267 "trtype": "TCP", 00:16:56.267 "adrfam": "IPv4", 00:16:56.267 "traddr": "10.0.0.2", 00:16:56.267 "trsvcid": "4420" 00:16:56.267 }, 00:16:56.267 "peer_address": { 00:16:56.267 "trtype": "TCP", 00:16:56.267 "adrfam": "IPv4", 00:16:56.267 "traddr": "10.0.0.1", 00:16:56.267 "trsvcid": "38270" 00:16:56.267 }, 00:16:56.267 "auth": { 00:16:56.267 "state": "completed", 00:16:56.267 "digest": "sha384", 00:16:56.267 "dhgroup": "ffdhe4096" 00:16:56.267 } 00:16:56.267 } 00:16:56.267 ]' 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.268 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.527 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.094 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.353 
19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.353 19:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.613 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.613 { 00:16:57.613 "cntlid": 77, 00:16:57.613 "qid": 0, 00:16:57.613 "state": "enabled", 00:16:57.613 "thread": "nvmf_tgt_poll_group_000", 00:16:57.613 "listen_address": { 00:16:57.613 "trtype": "TCP", 00:16:57.613 "adrfam": "IPv4", 00:16:57.613 "traddr": "10.0.0.2", 00:16:57.613 "trsvcid": "4420" 00:16:57.613 }, 00:16:57.613 "peer_address": { 
00:16:57.613 "trtype": "TCP", 00:16:57.613 "adrfam": "IPv4", 00:16:57.613 "traddr": "10.0.0.1", 00:16:57.613 "trsvcid": "38296" 00:16:57.613 }, 00:16:57.613 "auth": { 00:16:57.613 "state": "completed", 00:16:57.613 "digest": "sha384", 00:16:57.613 "dhgroup": "ffdhe4096" 00:16:57.613 } 00:16:57.613 } 00:16:57.613 ]' 00:16:57.613 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.871 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.129 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.698 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.699 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.958 00:16:58.958 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.958 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.958 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.217 { 00:16:59.217 "cntlid": 79, 00:16:59.217 "qid": 0, 00:16:59.217 "state": "enabled", 00:16:59.217 "thread": "nvmf_tgt_poll_group_000", 00:16:59.217 "listen_address": { 00:16:59.217 "trtype": "TCP", 00:16:59.217 "adrfam": "IPv4", 00:16:59.217 "traddr": "10.0.0.2", 00:16:59.217 "trsvcid": "4420" 00:16:59.217 }, 00:16:59.217 "peer_address": { 00:16:59.217 "trtype": "TCP", 00:16:59.217 "adrfam": "IPv4", 00:16:59.217 "traddr": "10.0.0.1", 00:16:59.217 "trsvcid": "38332" 00:16:59.217 }, 00:16:59.217 "auth": { 00:16:59.217 "state": "completed", 00:16:59.217 "digest": "sha384", 00:16:59.217 "dhgroup": "ffdhe4096" 00:16:59.217 } 00:16:59.217 } 00:16:59.217 ]' 00:16:59.217 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:59.218 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.218 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.218 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.218 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.477 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.477 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.477 19:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.477 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.045 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.305 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.564 00:17:00.564 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.564 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.564 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.823 { 00:17:00.823 "cntlid": 81, 00:17:00.823 "qid": 0, 00:17:00.823 "state": "enabled", 00:17:00.823 "thread": "nvmf_tgt_poll_group_000", 00:17:00.823 "listen_address": { 00:17:00.823 "trtype": "TCP", 00:17:00.823 "adrfam": "IPv4", 00:17:00.823 "traddr": "10.0.0.2", 00:17:00.823 "trsvcid": "4420" 00:17:00.823 }, 00:17:00.823 "peer_address": { 00:17:00.823 "trtype": "TCP", 00:17:00.823 "adrfam": "IPv4", 00:17:00.823 "traddr": "10.0.0.1", 00:17:00.823 "trsvcid": "38354" 00:17:00.823 }, 00:17:00.823 "auth": { 00:17:00.823 "state": "completed", 00:17:00.823 "digest": "sha384", 00:17:00.823 "dhgroup": "ffdhe6144" 00:17:00.823 } 00:17:00.823 } 00:17:00.823 ]' 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.823 19:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.823 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.082 19:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.650 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.909 19:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.909 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.910 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.910 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.910 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.216 00:17:02.216 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.216 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.216 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.475 { 00:17:02.475 "cntlid": 83, 00:17:02.475 "qid": 0, 00:17:02.475 "state": "enabled", 00:17:02.475 "thread": "nvmf_tgt_poll_group_000", 00:17:02.475 "listen_address": { 00:17:02.475 "trtype": "TCP", 00:17:02.475 "adrfam": "IPv4", 00:17:02.475 "traddr": "10.0.0.2", 00:17:02.475 "trsvcid": "4420" 00:17:02.475 }, 00:17:02.475 "peer_address": { 00:17:02.475 "trtype": "TCP", 00:17:02.475 "adrfam": "IPv4", 00:17:02.475 "traddr": "10.0.0.1", 00:17:02.475 "trsvcid": "38372" 00:17:02.475 }, 00:17:02.475 "auth": { 00:17:02.475 "state": "completed", 00:17:02.475 "digest": "sha384", 00:17:02.475 "dhgroup": "ffdhe6144" 00:17:02.475 } 00:17:02.475 } 00:17:02.475 ]' 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.475 19:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.734 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.302 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.303 19:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.303 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.870 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.870 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.870 { 00:17:03.870 "cntlid": 85, 00:17:03.870 "qid": 0, 00:17:03.870 "state": "enabled", 00:17:03.870 "thread": "nvmf_tgt_poll_group_000", 00:17:03.870 "listen_address": { 00:17:03.870 "trtype": "TCP", 00:17:03.870 "adrfam": "IPv4", 00:17:03.870 "traddr": "10.0.0.2", 00:17:03.870 "trsvcid": "4420" 00:17:03.870 }, 00:17:03.870 "peer_address": { 00:17:03.870 "trtype": "TCP", 00:17:03.870 "adrfam": "IPv4", 00:17:03.870 "traddr": "10.0.0.1", 00:17:03.870 "trsvcid": "38398" 00:17:03.870 }, 00:17:03.870 "auth": { 00:17:03.870 "state": "completed", 00:17:03.870 "digest": "sha384", 00:17:03.870 "dhgroup": "ffdhe6144" 00:17:03.870 } 00:17:03.870 } 00:17:03.870 ]' 00:17:03.871 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.871 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.871 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.130 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.698 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.957 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.957 19:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.216 00:17:05.216 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.216 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.216 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.475 { 00:17:05.475 "cntlid": 87, 00:17:05.475 "qid": 0, 00:17:05.475 "state": "enabled", 00:17:05.475 "thread": "nvmf_tgt_poll_group_000", 00:17:05.475 "listen_address": { 00:17:05.475 "trtype": "TCP", 00:17:05.475 "adrfam": "IPv4", 00:17:05.475 "traddr": "10.0.0.2", 00:17:05.475 "trsvcid": "4420" 00:17:05.475 }, 00:17:05.475 "peer_address": { 00:17:05.475 "trtype": "TCP", 00:17:05.475 "adrfam": "IPv4", 00:17:05.475 "traddr": "10.0.0.1", 00:17:05.475 "trsvcid": "50946" 00:17:05.475 }, 00:17:05.475 "auth": { 00:17:05.475 "state": "completed", 00:17:05.475 "digest": "sha384", 00:17:05.475 "dhgroup": "ffdhe6144" 00:17:05.475 } 00:17:05.475 } 00:17:05.475 ]' 00:17:05.475 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.475 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.475 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.475 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.475 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.733 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.733 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.733 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.733 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.302 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.561 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.561 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.561 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.561 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.162 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.162 { 00:17:07.162 "cntlid": 89, 00:17:07.162 "qid": 0, 00:17:07.162 "state": "enabled", 00:17:07.162 "thread": "nvmf_tgt_poll_group_000", 00:17:07.162 "listen_address": { 00:17:07.162 "trtype": "TCP", 00:17:07.162 "adrfam": "IPv4", 00:17:07.162 "traddr": "10.0.0.2", 00:17:07.162 "trsvcid": "4420" 00:17:07.162 }, 00:17:07.162 "peer_address": { 00:17:07.162 "trtype": "TCP", 00:17:07.162 "adrfam": "IPv4", 00:17:07.162 "traddr": "10.0.0.1", 00:17:07.162 "trsvcid": "50982" 00:17:07.162 }, 00:17:07.162 "auth": { 00:17:07.162 "state": "completed", 00:17:07.162 "digest": "sha384", 00:17:07.162 "dhgroup": "ffdhe8192" 00:17:07.162 } 00:17:07.162 } 00:17:07.162 ]' 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.162 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.422 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:07.990 19:52:59 
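[Annotation, not part of the captured trace] The records above repeat one pattern per digest/dhgroup/key combination, visible through the target/auth.sh@NN markers. A minimal sketch of a single connect_authenticate iteration reconstructed from those markers; the hostrpc definition and the $hostnqn/$hostid variables are assumptions, and <secret>/<ctrl-secret> stand in for the DHHC-1 values the suite actually uses:

    # host-side RPCs go to the separate host.sock instance        (auth.sh@31)
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # pin the initiator to one digest/dhgroup pair                (auth.sh@94)
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # register the host on the subsystem with key N (plus ckeyN when defined)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0                # (auth.sh@39)

    # in-band-authenticated attach from the host side             (auth.sh@40)
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the qpair (see the annotation at the end of this section),
    # detach, then repeat the attach once more with nvme-cli      (auth.sh@52)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret '<secret>' --dhchap-ctrl-secret '<ctrl-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                 # (auth.sh@55)
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"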
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.990 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.249 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.819 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.819 { 00:17:08.819 "cntlid": 91, 00:17:08.819 "qid": 0, 00:17:08.819 "state": "enabled", 00:17:08.819 "thread": "nvmf_tgt_poll_group_000", 00:17:08.819 "listen_address": { 00:17:08.819 "trtype": "TCP", 00:17:08.819 "adrfam": "IPv4", 00:17:08.819 "traddr": "10.0.0.2", 00:17:08.819 "trsvcid": "4420" 00:17:08.819 }, 00:17:08.819 "peer_address": { 00:17:08.819 "trtype": "TCP", 00:17:08.819 "adrfam": "IPv4", 00:17:08.819 "traddr": "10.0.0.1", 00:17:08.819 "trsvcid": "51008" 00:17:08.819 }, 00:17:08.819 "auth": { 00:17:08.819 "state": "completed", 00:17:08.819 "digest": "sha384", 00:17:08.819 "dhgroup": "ffdhe8192" 00:17:08.819 } 00:17:08.819 } 00:17:08.819 ]' 00:17:08.819 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.079 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.339 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:09.908 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.909 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.479 00:17:10.479 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.479 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.479 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.739 { 00:17:10.739 "cntlid": 93, 00:17:10.739 "qid": 0, 00:17:10.739 "state": "enabled", 00:17:10.739 "thread": "nvmf_tgt_poll_group_000", 00:17:10.739 "listen_address": { 00:17:10.739 "trtype": "TCP", 00:17:10.739 "adrfam": "IPv4", 00:17:10.739 "traddr": "10.0.0.2", 00:17:10.739 "trsvcid": "4420" 00:17:10.739 }, 00:17:10.739 "peer_address": { 00:17:10.739 "trtype": "TCP", 00:17:10.739 "adrfam": "IPv4", 00:17:10.739 "traddr": "10.0.0.1", 00:17:10.739 "trsvcid": "51036" 00:17:10.739 }, 00:17:10.739 "auth": { 00:17:10.739 "state": "completed", 00:17:10.739 "digest": "sha384", 00:17:10.739 "dhgroup": "ffdhe8192" 00:17:10.739 } 00:17:10.739 } 00:17:10.739 ]' 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.739 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.999 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.567 19:53:02 
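[Annotation, background rather than log content] The --dhchap-secret and --dhchap-ctrl-secret strings in the nvme connect records follow the NVMe in-band authentication key format "DHHC-1:<hh>:<base64 key material>:", where <hh> = 00/01/02/03 selects the key transformation hash (none, SHA-256, SHA-384, SHA-512). Pairing a host key with a controller key exercises both directions of the DH-HMAC-CHAP exchange. With a recent nvme-cli such a key can be generated along these lines (illustrative, not taken from this log):

    # emit a DHHC-1 key transformed with SHA-256 for the given host NQN
    nvme gen-dhchap-key --hmac=1 -n "$hostnqn"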
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.567 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.567 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.138 00:17:12.138 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.138 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.138 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.398 { 00:17:12.398 "cntlid": 95, 00:17:12.398 "qid": 0, 00:17:12.398 "state": "enabled", 00:17:12.398 "thread": "nvmf_tgt_poll_group_000", 00:17:12.398 "listen_address": { 00:17:12.398 "trtype": "TCP", 00:17:12.398 "adrfam": "IPv4", 00:17:12.398 "traddr": "10.0.0.2", 00:17:12.398 "trsvcid": "4420" 00:17:12.398 }, 00:17:12.398 "peer_address": { 00:17:12.398 "trtype": "TCP", 00:17:12.398 "adrfam": "IPv4", 00:17:12.398 "traddr": "10.0.0.1", 00:17:12.398 "trsvcid": "51058" 00:17:12.398 }, 00:17:12.398 "auth": { 00:17:12.398 "state": "completed", 00:17:12.398 "digest": "sha384", 00:17:12.398 "dhgroup": "ffdhe8192" 00:17:12.398 } 00:17:12.398 } 00:17:12.398 ]' 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.398 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.659 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.228 19:53:04 
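[Annotation, not part of the captured trace] The @91-@93 markers just above expose the sweep that drives this whole section: every digest is tried with every dhgroup and every key id. A minimal sketch of those loops as implied by the trace markers; the array contents are inferred from the combinations this log exercises (sha384 and sha512, dhgroups from null through ffdhe8192, key ids 0-3):

    for digest in "${digests[@]}"; do                             # (auth.sh@91)
        for dhgroup in "${dhgroups[@]}"; do                       # (auth.sh@92)
            for keyid in "${!keys[@]}"; do                        # (auth.sh@93)
                # restrict the host before each authenticated attach
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"                  # (auth.sh@94)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # (auth.sh@96)
            done
        done
    done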
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.228 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.229 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.229 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.229 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.229 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.229 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.491 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.491 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.491 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.491 00:17:13.491 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.491 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.491 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.751 19:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.751 { 00:17:13.751 "cntlid": 97, 00:17:13.751 "qid": 0, 00:17:13.751 "state": "enabled", 00:17:13.751 "thread": "nvmf_tgt_poll_group_000", 00:17:13.751 "listen_address": { 00:17:13.751 "trtype": "TCP", 00:17:13.751 "adrfam": "IPv4", 00:17:13.751 "traddr": "10.0.0.2", 00:17:13.751 "trsvcid": "4420" 00:17:13.751 }, 00:17:13.751 "peer_address": { 00:17:13.751 "trtype": "TCP", 00:17:13.751 "adrfam": "IPv4", 00:17:13.751 "traddr": "10.0.0.1", 00:17:13.751 "trsvcid": "51084" 00:17:13.751 }, 00:17:13.751 "auth": { 00:17:13.751 "state": "completed", 00:17:13.751 "digest": "sha512", 00:17:13.751 "dhgroup": "null" 00:17:13.751 } 00:17:13.751 } 00:17:13.751 ]' 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.751 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.011 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:14.580 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.840 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.099 00:17:15.099 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.099 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.099 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.360 { 00:17:15.360 "cntlid": 99, 00:17:15.360 "qid": 0, 00:17:15.360 "state": "enabled", 00:17:15.360 "thread": "nvmf_tgt_poll_group_000", 00:17:15.360 "listen_address": { 00:17:15.360 "trtype": "TCP", 00:17:15.360 "adrfam": "IPv4", 00:17:15.360 
"traddr": "10.0.0.2", 00:17:15.360 "trsvcid": "4420" 00:17:15.360 }, 00:17:15.360 "peer_address": { 00:17:15.360 "trtype": "TCP", 00:17:15.360 "adrfam": "IPv4", 00:17:15.360 "traddr": "10.0.0.1", 00:17:15.360 "trsvcid": "34542" 00:17:15.360 }, 00:17:15.360 "auth": { 00:17:15.360 "state": "completed", 00:17:15.360 "digest": "sha512", 00:17:15.360 "dhgroup": "null" 00:17:15.360 } 00:17:15.360 } 00:17:15.360 ]' 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.360 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.620 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:16.189 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.189 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.189 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.189 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.190 19:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.190 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.449 00:17:16.449 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.449 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.449 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.709 { 00:17:16.709 "cntlid": 101, 00:17:16.709 "qid": 0, 00:17:16.709 "state": "enabled", 00:17:16.709 "thread": "nvmf_tgt_poll_group_000", 00:17:16.709 "listen_address": { 00:17:16.709 "trtype": "TCP", 00:17:16.709 "adrfam": "IPv4", 00:17:16.709 "traddr": "10.0.0.2", 00:17:16.709 "trsvcid": "4420" 00:17:16.709 }, 00:17:16.709 "peer_address": { 00:17:16.709 "trtype": "TCP", 00:17:16.709 "adrfam": "IPv4", 00:17:16.709 "traddr": "10.0.0.1", 00:17:16.709 "trsvcid": "34556" 00:17:16.709 }, 00:17:16.709 "auth": { 00:17:16.709 "state": "completed", 00:17:16.709 "digest": "sha512", 00:17:16.709 "dhgroup": "null" 
00:17:16.709 } 00:17:16.709 } 00:17:16.709 ]' 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.709 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.968 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.536 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.796 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.796 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.055 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.056 { 00:17:18.056 "cntlid": 103, 00:17:18.056 "qid": 0, 00:17:18.056 "state": "enabled", 00:17:18.056 "thread": "nvmf_tgt_poll_group_000", 00:17:18.056 "listen_address": { 00:17:18.056 "trtype": "TCP", 00:17:18.056 "adrfam": "IPv4", 00:17:18.056 "traddr": "10.0.0.2", 00:17:18.056 "trsvcid": "4420" 00:17:18.056 }, 00:17:18.056 "peer_address": { 00:17:18.056 "trtype": "TCP", 00:17:18.056 "adrfam": "IPv4", 00:17:18.056 "traddr": "10.0.0.1", 00:17:18.056 "trsvcid": "34574" 00:17:18.056 }, 00:17:18.056 "auth": { 00:17:18.056 "state": "completed", 00:17:18.056 "digest": "sha512", 00:17:18.056 "dhgroup": "null" 00:17:18.056 } 00:17:18.056 } 00:17:18.056 ]' 00:17:18.056 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.056 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.056 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.315 19:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.315 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.315 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.315 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.315 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.315 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.884 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:19.143 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.144 19:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.144 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.404 00:17:19.404 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.404 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.404 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.664 { 00:17:19.664 "cntlid": 105, 00:17:19.664 "qid": 0, 00:17:19.664 "state": "enabled", 00:17:19.664 "thread": "nvmf_tgt_poll_group_000", 00:17:19.664 "listen_address": { 00:17:19.664 "trtype": "TCP", 00:17:19.664 "adrfam": "IPv4", 00:17:19.664 "traddr": "10.0.0.2", 00:17:19.664 "trsvcid": "4420" 00:17:19.664 }, 00:17:19.664 "peer_address": { 00:17:19.664 "trtype": "TCP", 00:17:19.664 "adrfam": "IPv4", 00:17:19.664 "traddr": "10.0.0.1", 00:17:19.664 "trsvcid": "34618" 00:17:19.664 }, 00:17:19.664 "auth": { 00:17:19.664 "state": "completed", 00:17:19.664 "digest": "sha512", 00:17:19.664 "dhgroup": "ffdhe2048" 00:17:19.664 } 00:17:19.664 } 00:17:19.664 ]' 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.664 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.924 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.493 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.493 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.752 00:17:20.752 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.752 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.752 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.012 { 00:17:21.012 "cntlid": 107, 00:17:21.012 "qid": 0, 00:17:21.012 "state": "enabled", 00:17:21.012 "thread": "nvmf_tgt_poll_group_000", 00:17:21.012 "listen_address": { 00:17:21.012 "trtype": "TCP", 00:17:21.012 "adrfam": "IPv4", 00:17:21.012 "traddr": "10.0.0.2", 00:17:21.012 "trsvcid": "4420" 00:17:21.012 }, 00:17:21.012 "peer_address": { 00:17:21.012 "trtype": "TCP", 00:17:21.012 "adrfam": "IPv4", 00:17:21.012 "traddr": "10.0.0.1", 00:17:21.012 "trsvcid": "34632" 00:17:21.012 }, 00:17:21.012 "auth": { 00:17:21.012 "state": "completed", 00:17:21.012 "digest": "sha512", 00:17:21.012 "dhgroup": "ffdhe2048" 00:17:21.012 } 00:17:21.012 } 00:17:21.012 ]' 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.012 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.310 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.892 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.893 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.893 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:22.151 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.411 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.411 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.411 { 00:17:22.411 "cntlid": 109, 00:17:22.411 "qid": 0, 00:17:22.411 "state": "enabled", 00:17:22.411 "thread": "nvmf_tgt_poll_group_000", 00:17:22.411 "listen_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.2", 00:17:22.411 "trsvcid": "4420" 00:17:22.411 }, 00:17:22.411 "peer_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.1", 00:17:22.411 "trsvcid": "34658" 00:17:22.411 }, 00:17:22.411 "auth": { 00:17:22.412 "state": "completed", 00:17:22.412 "digest": "sha512", 00:17:22.412 "dhgroup": "ffdhe2048" 00:17:22.412 } 00:17:22.412 } 00:17:22.412 ]' 00:17:22.412 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.412 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.412 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.671 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.240 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.241 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.501 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.761 00:17:23.761 19:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.761 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.761 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.021 { 00:17:24.021 "cntlid": 111, 00:17:24.021 "qid": 0, 00:17:24.021 "state": "enabled", 00:17:24.021 "thread": "nvmf_tgt_poll_group_000", 00:17:24.021 "listen_address": { 00:17:24.021 "trtype": "TCP", 00:17:24.021 "adrfam": "IPv4", 00:17:24.021 "traddr": "10.0.0.2", 00:17:24.021 "trsvcid": "4420" 00:17:24.021 }, 00:17:24.021 "peer_address": { 00:17:24.021 "trtype": "TCP", 00:17:24.021 "adrfam": "IPv4", 00:17:24.021 "traddr": "10.0.0.1", 00:17:24.021 "trsvcid": "34682" 00:17:24.021 }, 00:17:24.021 "auth": { 00:17:24.021 "state": "completed", 00:17:24.021 "digest": "sha512", 00:17:24.021 "dhgroup": "ffdhe2048" 00:17:24.021 } 00:17:24.021 } 00:17:24.021 ]' 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.021 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.280 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.849 19:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.849 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.108 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.367 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.367 19:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.367 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.367 { 00:17:25.367 "cntlid": 113, 00:17:25.367 "qid": 0, 00:17:25.367 "state": "enabled", 00:17:25.367 "thread": "nvmf_tgt_poll_group_000", 00:17:25.367 "listen_address": { 00:17:25.367 "trtype": "TCP", 00:17:25.367 "adrfam": "IPv4", 00:17:25.367 "traddr": "10.0.0.2", 00:17:25.367 "trsvcid": "4420" 00:17:25.367 }, 00:17:25.367 "peer_address": { 00:17:25.367 "trtype": "TCP", 00:17:25.367 "adrfam": "IPv4", 00:17:25.367 "traddr": "10.0.0.1", 00:17:25.368 "trsvcid": "40906" 00:17:25.368 }, 00:17:25.368 "auth": { 00:17:25.368 "state": "completed", 00:17:25.368 "digest": "sha512", 00:17:25.368 "dhgroup": "ffdhe3072" 00:17:25.368 } 00:17:25.368 } 00:17:25.368 ]' 00:17:25.368 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.627 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.627 19:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.627 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.627 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.627 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.627 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.627 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.893 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.463 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.722 00:17:26.722 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.722 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.722 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.981 { 00:17:26.981 "cntlid": 115, 00:17:26.981 "qid": 0, 00:17:26.981 "state": "enabled", 00:17:26.981 "thread": "nvmf_tgt_poll_group_000", 00:17:26.981 "listen_address": { 00:17:26.981 "trtype": "TCP", 00:17:26.981 "adrfam": "IPv4", 00:17:26.981 "traddr": "10.0.0.2", 00:17:26.981 "trsvcid": "4420" 00:17:26.981 }, 00:17:26.981 "peer_address": { 00:17:26.981 "trtype": "TCP", 00:17:26.981 "adrfam": "IPv4", 00:17:26.981 "traddr": "10.0.0.1", 00:17:26.981 "trsvcid": "40924" 00:17:26.981 }, 00:17:26.981 "auth": { 00:17:26.981 "state": "completed", 00:17:26.981 "digest": "sha512", 00:17:26.981 "dhgroup": "ffdhe3072" 00:17:26.981 } 00:17:26.981 } 00:17:26.981 ]' 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.981 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.241 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.810 19:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.810 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.070 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.330 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.330 19:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.330 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.331 { 00:17:28.331 "cntlid": 117, 00:17:28.331 "qid": 0, 00:17:28.331 "state": "enabled", 00:17:28.331 "thread": "nvmf_tgt_poll_group_000", 00:17:28.331 "listen_address": { 00:17:28.331 "trtype": "TCP", 00:17:28.331 "adrfam": "IPv4", 00:17:28.331 "traddr": "10.0.0.2", 00:17:28.331 "trsvcid": "4420" 00:17:28.331 }, 00:17:28.331 "peer_address": { 00:17:28.331 "trtype": "TCP", 00:17:28.331 "adrfam": "IPv4", 00:17:28.331 "traddr": "10.0.0.1", 00:17:28.331 "trsvcid": "40960" 00:17:28.331 }, 00:17:28.331 "auth": { 00:17:28.331 "state": "completed", 00:17:28.331 "digest": "sha512", 00:17:28.331 "dhgroup": "ffdhe3072" 00:17:28.331 } 00:17:28.331 } 00:17:28.331 ]' 00:17:28.331 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.591 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.591 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.591 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.591 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.591 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.591 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.591 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.851 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.422 19:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.682 00:17:29.682 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.682 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.682 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.943 { 00:17:29.943 "cntlid": 119, 00:17:29.943 "qid": 0, 00:17:29.943 "state": "enabled", 00:17:29.943 "thread": 
"nvmf_tgt_poll_group_000", 00:17:29.943 "listen_address": { 00:17:29.943 "trtype": "TCP", 00:17:29.943 "adrfam": "IPv4", 00:17:29.943 "traddr": "10.0.0.2", 00:17:29.943 "trsvcid": "4420" 00:17:29.943 }, 00:17:29.943 "peer_address": { 00:17:29.943 "trtype": "TCP", 00:17:29.943 "adrfam": "IPv4", 00:17:29.943 "traddr": "10.0.0.1", 00:17:29.943 "trsvcid": "40978" 00:17:29.943 }, 00:17:29.943 "auth": { 00:17:29.943 "state": "completed", 00:17:29.943 "digest": "sha512", 00:17:29.943 "dhgroup": "ffdhe3072" 00:17:29.943 } 00:17:29.943 } 00:17:29.943 ]' 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.943 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.203 19:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.773 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.033 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.293 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.293 { 00:17:31.293 "cntlid": 121, 00:17:31.293 "qid": 0, 00:17:31.293 "state": "enabled", 00:17:31.293 "thread": "nvmf_tgt_poll_group_000", 00:17:31.293 "listen_address": { 00:17:31.293 "trtype": "TCP", 00:17:31.293 "adrfam": "IPv4", 00:17:31.293 "traddr": "10.0.0.2", 00:17:31.293 "trsvcid": "4420" 00:17:31.293 }, 00:17:31.293 "peer_address": { 00:17:31.293 "trtype": "TCP", 00:17:31.293 "adrfam": 
"IPv4", 00:17:31.293 "traddr": "10.0.0.1", 00:17:31.293 "trsvcid": "41000" 00:17:31.293 }, 00:17:31.293 "auth": { 00:17:31.293 "state": "completed", 00:17:31.293 "digest": "sha512", 00:17:31.293 "dhgroup": "ffdhe4096" 00:17:31.293 } 00:17:31.293 } 00:17:31.293 ]' 00:17:31.293 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.553 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.813 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:32.381 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.381 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.381 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.382 
19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.382 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.642 00:17:32.642 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.642 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.642 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.902 { 00:17:32.902 "cntlid": 123, 00:17:32.902 "qid": 0, 00:17:32.902 "state": "enabled", 00:17:32.902 "thread": "nvmf_tgt_poll_group_000", 00:17:32.902 "listen_address": { 00:17:32.902 "trtype": "TCP", 00:17:32.902 "adrfam": "IPv4", 00:17:32.902 "traddr": "10.0.0.2", 00:17:32.902 "trsvcid": "4420" 00:17:32.902 }, 00:17:32.902 "peer_address": { 00:17:32.902 "trtype": "TCP", 00:17:32.902 "adrfam": "IPv4", 00:17:32.902 "traddr": "10.0.0.1", 00:17:32.902 "trsvcid": "41030" 00:17:32.902 }, 00:17:32.902 "auth": { 00:17:32.902 "state": "completed", 00:17:32.902 "digest": "sha512", 00:17:32.902 "dhgroup": "ffdhe4096" 00:17:32.902 } 00:17:32.902 } 00:17:32.902 ]' 00:17:32.902 19:53:24 
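Every rpc_cmd call in this log is bracketed by a common/autotest_common.sh@561 "xtrace_disable" echo and a closing @589 "[[ 0 == 0 ]]" check. A hedged sketch of what that wrapper evidently does (the real common/autotest_common.sh is more involved, and $rootdir here is an assumed variable): it silences shell tracing while shuttling the RPC, then asserts the RPC's exit status, which is what surfaces in the trace as [[ 0 == 0 ]].

# Sketch only, not the verbatim SPDK wrapper:
rpc_cmd() {
    xtrace_disable                      # stop echoing every internal line
    local rc=0
    "$rootdir/scripts/rpc.py" "$@" || rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                      # appears in the trace as [[ 0 == 0 ]]
}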
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.902 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.162 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.162 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.162 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.162 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.732 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.992 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.252 00:17:34.252 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.252 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.252 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.512 { 00:17:34.512 "cntlid": 125, 00:17:34.512 "qid": 0, 00:17:34.512 "state": "enabled", 00:17:34.512 "thread": "nvmf_tgt_poll_group_000", 00:17:34.512 "listen_address": { 00:17:34.512 "trtype": "TCP", 00:17:34.512 "adrfam": "IPv4", 00:17:34.512 "traddr": "10.0.0.2", 00:17:34.512 "trsvcid": "4420" 00:17:34.512 }, 00:17:34.512 "peer_address": { 00:17:34.512 "trtype": "TCP", 00:17:34.512 "adrfam": "IPv4", 00:17:34.512 "traddr": "10.0.0.1", 00:17:34.512 "trsvcid": "41050" 00:17:34.512 }, 00:17:34.512 "auth": { 00:17:34.512 "state": "completed", 00:17:34.512 "digest": "sha512", 00:17:34.512 "dhgroup": "ffdhe4096" 00:17:34.512 } 00:17:34.512 } 00:17:34.512 ]' 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.512 
19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.512 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.512 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.512 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.512 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.772 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.342 19:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.630 00:17:35.630 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.630 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.630 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.898 { 00:17:35.898 "cntlid": 127, 00:17:35.898 "qid": 0, 00:17:35.898 "state": "enabled", 00:17:35.898 "thread": "nvmf_tgt_poll_group_000", 00:17:35.898 "listen_address": { 00:17:35.898 "trtype": "TCP", 00:17:35.898 "adrfam": "IPv4", 00:17:35.898 "traddr": "10.0.0.2", 00:17:35.898 "trsvcid": "4420" 00:17:35.898 }, 00:17:35.898 "peer_address": { 00:17:35.898 "trtype": "TCP", 00:17:35.898 "adrfam": "IPv4", 00:17:35.898 "traddr": "10.0.0.1", 00:17:35.898 "trsvcid": "33222" 00:17:35.898 }, 00:17:35.898 "auth": { 00:17:35.898 "state": "completed", 00:17:35.898 "digest": "sha512", 00:17:35.898 "dhgroup": "ffdhe4096" 00:17:35.898 } 00:17:35.898 } 00:17:35.898 ]' 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.898 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.158 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.158 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.158 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.158 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.727 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.728 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.728 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.728 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.988 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.248 00:17:37.248 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.248 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.248 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.508 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.508 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.508 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.508 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.509 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.509 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.509 { 00:17:37.509 "cntlid": 129, 00:17:37.509 "qid": 0, 00:17:37.509 "state": "enabled", 00:17:37.509 "thread": "nvmf_tgt_poll_group_000", 00:17:37.509 "listen_address": { 00:17:37.509 "trtype": "TCP", 00:17:37.509 "adrfam": "IPv4", 00:17:37.509 "traddr": "10.0.0.2", 00:17:37.509 "trsvcid": "4420" 00:17:37.509 }, 00:17:37.509 "peer_address": { 00:17:37.509 "trtype": "TCP", 00:17:37.509 "adrfam": "IPv4", 00:17:37.509 "traddr": "10.0.0.1", 00:17:37.509 "trsvcid": "33256" 00:17:37.509 }, 00:17:37.509 "auth": { 00:17:37.509 "state": "completed", 00:17:37.509 "digest": "sha512", 00:17:37.509 "dhgroup": "ffdhe6144" 00:17:37.509 } 00:17:37.509 } 00:17:37.509 ]' 00:17:37.509 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.509 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.509 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.769 
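Each successful attach is verified the same way in the trace: confirm the host-side controller exists, then dump the target's view of the new queue pair and assert the negotiated auth parameters (@44-@48). A condensed equivalent of those checks, using the same RPCs and jq paths as the log:

name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # current group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# "state": "completed" is the actual assertion that DH-HMAC-CHAP finished
# on this TCP qpair, not merely that the connection was established.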
19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.339 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.599 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.859 00:17:38.859 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.859 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.859 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.119 { 00:17:39.119 "cntlid": 131, 00:17:39.119 "qid": 0, 00:17:39.119 "state": "enabled", 00:17:39.119 "thread": "nvmf_tgt_poll_group_000", 00:17:39.119 "listen_address": { 00:17:39.119 "trtype": "TCP", 00:17:39.119 "adrfam": "IPv4", 00:17:39.119 "traddr": "10.0.0.2", 00:17:39.119 "trsvcid": "4420" 00:17:39.119 }, 00:17:39.119 "peer_address": { 00:17:39.119 "trtype": "TCP", 00:17:39.119 "adrfam": "IPv4", 00:17:39.119 "traddr": "10.0.0.1", 00:17:39.119 "trsvcid": "33284" 00:17:39.119 }, 00:17:39.119 "auth": { 00:17:39.119 "state": "completed", 00:17:39.119 "digest": "sha512", 00:17:39.119 "dhgroup": "ffdhe6144" 00:17:39.119 } 00:17:39.119 } 00:17:39.119 ]' 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.119 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.379 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.948 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.518 
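The --dhchap-secret and --dhchap-ctrl-secret values on the nvme connect lines follow the DH-HMAC-CHAP secret representation. As a sketch of their shape (the base64 payloads from this log are elided, and $hostnqn/$hostid stand in for the uuid-based NQN used throughout):

# DHHC-1:<t>:<base64( key || crc32(key) )>:
#   <t> = 00  key used as-is (no transform)
#   <t> = 01  SHA-256-transformed secret (32-byte key)
#   <t> = 02  SHA-384-transformed secret (48-byte key)
#   <t> = 03  SHA-512-transformed secret (64-byte key)
nvme connect -t tcp -a 10.0.0.2 -i 1 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:02:<elided>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<elided>:'
# --dhchap-secret authenticates the host to the target; adding
# --dhchap-ctrl-secret makes the host verify the controller as well
# (bidirectional authentication), as the key0-key2 runs in this log do.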
00:17:40.518 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.518 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.518 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.518 { 00:17:40.518 "cntlid": 133, 00:17:40.518 "qid": 0, 00:17:40.518 "state": "enabled", 00:17:40.518 "thread": "nvmf_tgt_poll_group_000", 00:17:40.518 "listen_address": { 00:17:40.518 "trtype": "TCP", 00:17:40.518 "adrfam": "IPv4", 00:17:40.518 "traddr": "10.0.0.2", 00:17:40.518 "trsvcid": "4420" 00:17:40.518 }, 00:17:40.518 "peer_address": { 00:17:40.518 "trtype": "TCP", 00:17:40.518 "adrfam": "IPv4", 00:17:40.518 "traddr": "10.0.0.1", 00:17:40.518 "trsvcid": "33316" 00:17:40.518 }, 00:17:40.518 "auth": { 00:17:40.518 "state": "completed", 00:17:40.518 "digest": "sha512", 00:17:40.518 "dhgroup": "ffdhe6144" 00:17:40.518 } 00:17:40.518 } 00:17:40.518 ]' 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.518 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.778 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.348 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.348 19:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.608 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.868 00:17:41.868 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.868 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.868 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.127 { 00:17:42.127 "cntlid": 135, 00:17:42.127 "qid": 0, 00:17:42.127 "state": "enabled", 00:17:42.127 "thread": "nvmf_tgt_poll_group_000", 00:17:42.127 "listen_address": { 00:17:42.127 "trtype": "TCP", 00:17:42.127 "adrfam": "IPv4", 00:17:42.127 "traddr": "10.0.0.2", 00:17:42.127 "trsvcid": "4420" 00:17:42.127 }, 00:17:42.127 "peer_address": { 00:17:42.127 "trtype": "TCP", 00:17:42.127 "adrfam": "IPv4", 00:17:42.127 "traddr": "10.0.0.1", 00:17:42.127 "trsvcid": "33342" 00:17:42.127 }, 00:17:42.127 "auth": { 00:17:42.127 "state": "completed", 00:17:42.127 "digest": "sha512", 00:17:42.127 "dhgroup": "ffdhe6144" 00:17:42.127 } 00:17:42.127 } 00:17:42.127 ]' 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.127 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.386 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.386 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.386 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.386 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.955 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.214 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:43.214 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.215 19:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.816 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
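Two RPC sockets are in play throughout this trace: rpc_cmd drives the nvmf target, while every hostrpc line expands to rpc.py -s /var/tmp/host.sock, a separate host-side SPDK instance that owns the nvme0 bdev controller. Reconstructed shape of the wrapper and of the teardown that closes every cycle; the function body is an assumption inferred from the @31 expansions, but the teardown commands are taken verbatim from the trace (@49, @55, @56):

hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}

hostrpc bdev_nvme_detach_controller nvme0          # drop host-side controller
nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # drop the kernel probe
rpc_cmd nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 "$hostnqn"          # de-authorize the host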
00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.816 { 00:17:43.816 "cntlid": 137, 00:17:43.816 "qid": 0, 00:17:43.816 "state": "enabled", 00:17:43.816 "thread": "nvmf_tgt_poll_group_000", 00:17:43.816 "listen_address": { 00:17:43.816 "trtype": "TCP", 00:17:43.816 "adrfam": "IPv4", 00:17:43.816 "traddr": "10.0.0.2", 00:17:43.816 "trsvcid": "4420" 00:17:43.816 }, 00:17:43.816 "peer_address": { 00:17:43.816 "trtype": "TCP", 00:17:43.816 "adrfam": "IPv4", 00:17:43.816 "traddr": "10.0.0.1", 00:17:43.816 "trsvcid": "33362" 00:17:43.816 }, 00:17:43.816 "auth": { 00:17:43.816 "state": "completed", 00:17:43.816 "digest": "sha512", 00:17:43.816 "dhgroup": "ffdhe8192" 00:17:43.816 } 00:17:43.816 } 00:17:43.816 ]' 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.816 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.076 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.076 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.076 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.076 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.647 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.907 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.476 00:17:45.476 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.476 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.476 19:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.476 { 00:17:45.476 "cntlid": 139, 00:17:45.476 "qid": 0, 00:17:45.476 "state": "enabled", 00:17:45.476 "thread": "nvmf_tgt_poll_group_000", 00:17:45.476 "listen_address": { 00:17:45.476 "trtype": "TCP", 00:17:45.476 "adrfam": "IPv4", 00:17:45.476 "traddr": "10.0.0.2", 00:17:45.476 "trsvcid": "4420" 00:17:45.476 }, 00:17:45.476 "peer_address": { 00:17:45.476 "trtype": "TCP", 00:17:45.476 "adrfam": "IPv4", 00:17:45.476 "traddr": "10.0.0.1", 00:17:45.476 "trsvcid": "37274" 00:17:45.476 }, 00:17:45.476 "auth": { 00:17:45.476 "state": "completed", 00:17:45.476 "digest": "sha512", 00:17:45.476 "dhgroup": "ffdhe8192" 00:17:45.476 } 00:17:45.476 } 00:17:45.476 ]' 00:17:45.476 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.737 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.997 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQ0OGNlNWE3Mjg0Y2E3ZDI5Njg0NGQzNWYwOTU1MGHainNt: --dhchap-ctrl-secret DHHC-1:02:ZTFmMjM5NjhlNGZjNjEyOTRjYmExN2Y3M2M5ZmQ3NTFhYjI1N2IyMzg2ZWY0ODMyJzCBfg==: 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.567 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.567 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.137 00:17:47.137 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.137 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.137 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.397 { 00:17:47.397 "cntlid": 141, 00:17:47.397 "qid": 0, 00:17:47.397 "state": "enabled", 00:17:47.397 "thread": "nvmf_tgt_poll_group_000", 00:17:47.397 "listen_address": 
{ 00:17:47.397 "trtype": "TCP", 00:17:47.397 "adrfam": "IPv4", 00:17:47.397 "traddr": "10.0.0.2", 00:17:47.397 "trsvcid": "4420" 00:17:47.397 }, 00:17:47.397 "peer_address": { 00:17:47.397 "trtype": "TCP", 00:17:47.397 "adrfam": "IPv4", 00:17:47.397 "traddr": "10.0.0.1", 00:17:47.397 "trsvcid": "37298" 00:17:47.397 }, 00:17:47.397 "auth": { 00:17:47.397 "state": "completed", 00:17:47.397 "digest": "sha512", 00:17:47.397 "dhgroup": "ffdhe8192" 00:17:47.397 } 00:17:47.397 } 00:17:47.397 ]' 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.397 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.657 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZGNiYWZhZjNiY2EyYTdhNTVjOWU5MzNhNmVhNTYzODVlNzI1ZTMzYzAyZDA5ODMxlV7hEQ==: --dhchap-ctrl-secret DHHC-1:01:Y2I5YzcyNmViZTBhYzM0Y2I1MWNmZTYyNmI4ZThiMzU/+MQw: 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.227 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.797 00:17:48.797 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.797 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.797 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.058 { 00:17:49.058 "cntlid": 143, 00:17:49.058 "qid": 0, 00:17:49.058 "state": "enabled", 00:17:49.058 "thread": "nvmf_tgt_poll_group_000", 00:17:49.058 "listen_address": { 00:17:49.058 "trtype": "TCP", 00:17:49.058 "adrfam": "IPv4", 00:17:49.058 "traddr": "10.0.0.2", 00:17:49.058 "trsvcid": "4420" 00:17:49.058 }, 00:17:49.058 "peer_address": { 00:17:49.058 "trtype": "TCP", 00:17:49.058 "adrfam": "IPv4", 00:17:49.058 "traddr": "10.0.0.1", 00:17:49.058 "trsvcid": "37332" 00:17:49.058 }, 00:17:49.058 "auth": { 00:17:49.058 "state": "completed", 00:17:49.058 "digest": "sha512", 00:17:49.058 "dhgroup": 
"ffdhe8192" 00:17:49.058 } 00:17:49.058 } 00:17:49.058 ]' 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.058 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.318 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.889 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.164 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.455 00:17:50.455 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.455 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.455 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.715 { 00:17:50.715 "cntlid": 145, 00:17:50.715 "qid": 0, 00:17:50.715 "state": "enabled", 00:17:50.715 "thread": "nvmf_tgt_poll_group_000", 00:17:50.715 "listen_address": { 00:17:50.715 "trtype": "TCP", 00:17:50.715 "adrfam": "IPv4", 00:17:50.715 "traddr": "10.0.0.2", 00:17:50.715 "trsvcid": "4420" 00:17:50.715 }, 00:17:50.715 "peer_address": { 00:17:50.715 "trtype": "TCP", 00:17:50.715 "adrfam": "IPv4", 00:17:50.715 "traddr": "10.0.0.1", 00:17:50.715 "trsvcid": "37350" 00:17:50.715 }, 00:17:50.715 "auth": { 00:17:50.715 
"state": "completed", 00:17:50.715 "digest": "sha512", 00:17:50.715 "dhgroup": "ffdhe8192" 00:17:50.715 } 00:17:50.715 } 00:17:50.715 ]' 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.715 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.975 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YjMwOGI3MzE3Y2YwNTgxY2FmMGE5YjFiZGUyNWI4NDRhMDg0OGZmNzcxNTBkMWRle9M19Q==: --dhchap-ctrl-secret DHHC-1:03:MGZiYTJmNzFmZTRiNzlmMTJkNTlmZjU1OTBlODEwMmRiNjQ0MjRkZjAyODhiYWM2ZmYyMDYyYjcwOGU5ODVkMWs2Yf0=: 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.545 19:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:51.545 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.546 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:51.546 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.546 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.546 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:52.114 request: 00:17:52.114 { 00:17:52.114 "name": "nvme0", 00:17:52.114 "trtype": "tcp", 00:17:52.114 "traddr": "10.0.0.2", 00:17:52.114 "adrfam": "ipv4", 00:17:52.114 "trsvcid": "4420", 00:17:52.114 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.114 "prchk_reftag": false, 00:17:52.114 "prchk_guard": false, 00:17:52.114 "hdgst": false, 00:17:52.114 "ddgst": false, 00:17:52.114 "dhchap_key": "key2", 00:17:52.114 "method": "bdev_nvme_attach_controller", 00:17:52.114 "req_id": 1 00:17:52.115 } 00:17:52.115 Got JSON-RPC error response 00:17:52.115 response: 00:17:52.115 { 00:17:52.115 "code": -5, 00:17:52.115 "message": "Input/output error" 00:17:52.115 } 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.115 
19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.115 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.374 request: 00:17:52.374 { 00:17:52.374 "name": "nvme0", 00:17:52.374 "trtype": "tcp", 00:17:52.374 "traddr": "10.0.0.2", 00:17:52.374 "adrfam": "ipv4", 00:17:52.374 "trsvcid": "4420", 00:17:52.374 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.374 "prchk_reftag": false, 00:17:52.374 "prchk_guard": false, 00:17:52.374 "hdgst": false, 00:17:52.374 "ddgst": false, 00:17:52.374 "dhchap_key": "key1", 00:17:52.374 "dhchap_ctrlr_key": "ckey2", 00:17:52.374 "method": "bdev_nvme_attach_controller", 00:17:52.374 "req_id": 1 00:17:52.374 } 00:17:52.374 Got JSON-RPC error response 00:17:52.374 response: 00:17:52.374 { 00:17:52.374 "code": -5, 00:17:52.374 "message": "Input/output error" 00:17:52.374 } 00:17:52.374 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.374 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.374 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.374 19:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.374 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.375 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.945 request: 00:17:52.945 { 00:17:52.945 "name": "nvme0", 00:17:52.945 "trtype": "tcp", 00:17:52.945 "traddr": "10.0.0.2", 00:17:52.945 "adrfam": "ipv4", 00:17:52.945 "trsvcid": "4420", 00:17:52.945 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.945 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.945 "prchk_reftag": false, 00:17:52.945 "prchk_guard": false, 00:17:52.945 "hdgst": false, 00:17:52.945 "ddgst": false, 00:17:52.945 "dhchap_key": "key1", 00:17:52.945 "dhchap_ctrlr_key": "ckey1", 00:17:52.945 "method": "bdev_nvme_attach_controller", 00:17:52.945 "req_id": 1 00:17:52.945 } 00:17:52.945 Got JSON-RPC error response 00:17:52.945 response: 00:17:52.945 { 00:17:52.945 "code": -5, 00:17:52.945 "message": "Input/output error" 00:17:52.945 } 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2044378 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2044378 ']' 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2044378 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2044378 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2044378' 00:17:52.945 killing process with pid 2044378 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2044378 00:17:52.945 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2044378 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2064699 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2064699 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2064699 ']' 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.205 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2064699 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2064699 ']' 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
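[Condensed from the xtrace output above: the @139 nvmfappstart step kills the first target and relaunches it with DH-HMAC-CHAP tracing before the renegotiation tests. A minimal sketch of that restart, with the netns name, binary path, and flags taken verbatim from this log; waitforlisten is the polling helper from common/autotest_common.sh:

    # Relaunch the SPDK target inside the test namespace with nvmf_auth tracing.
    # --wait-for-rpc defers subsystem init until the framework_start_init RPC.
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Block until the new process is serving the RPC socket (/var/tmp/spdk.sock).
    waitforlisten "$nvmfpid"
]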
00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.146 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.407 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.408 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.978 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.978 { 00:17:54.978 "cntlid": 1, 00:17:54.978 "qid": 0, 00:17:54.978 "state": "enabled", 00:17:54.978 "thread": "nvmf_tgt_poll_group_000", 00:17:54.978 "listen_address": { 00:17:54.978 "trtype": "TCP", 00:17:54.978 "adrfam": "IPv4", 00:17:54.978 "traddr": "10.0.0.2", 00:17:54.978 "trsvcid": "4420" 00:17:54.978 }, 00:17:54.978 "peer_address": { 00:17:54.978 "trtype": "TCP", 00:17:54.978 "adrfam": "IPv4", 00:17:54.978 "traddr": "10.0.0.1", 00:17:54.978 "trsvcid": "37418" 00:17:54.978 }, 00:17:54.978 "auth": { 00:17:54.978 "state": "completed", 00:17:54.978 "digest": "sha512", 00:17:54.978 "dhgroup": "ffdhe8192" 00:17:54.978 } 00:17:54.978 } 00:17:54.978 ]' 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.978 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.238 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTk1MzY3NDBkNTcxYTZmN2Y4MGNmMGVlNzE3ZDU1MzJiY2NmODcxODc0YTllYmNjOWQ4ZTlmMzhmYmRjNDE3ORpiQYs=: 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:55.808 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.068 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.328 request: 00:17:56.328 { 00:17:56.328 "name": "nvme0", 00:17:56.328 "trtype": "tcp", 00:17:56.328 "traddr": "10.0.0.2", 00:17:56.328 "adrfam": "ipv4", 00:17:56.328 "trsvcid": "4420", 00:17:56.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.328 "prchk_reftag": false, 00:17:56.328 "prchk_guard": false, 00:17:56.328 "hdgst": false, 00:17:56.328 "ddgst": false, 00:17:56.328 "dhchap_key": "key3", 00:17:56.328 "method": "bdev_nvme_attach_controller", 00:17:56.328 "req_id": 1 00:17:56.328 } 00:17:56.328 Got JSON-RPC error response 00:17:56.328 response: 00:17:56.328 { 00:17:56.328 "code": -5, 00:17:56.328 "message": "Input/output error" 00:17:56.328 } 00:17:56.328 19:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.328 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.588 request: 00:17:56.588 { 00:17:56.588 "name": "nvme0", 00:17:56.588 "trtype": "tcp", 00:17:56.588 "traddr": "10.0.0.2", 00:17:56.588 "adrfam": "ipv4", 00:17:56.588 "trsvcid": "4420", 00:17:56.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.588 "prchk_reftag": false, 00:17:56.588 "prchk_guard": false, 00:17:56.588 "hdgst": false, 00:17:56.588 "ddgst": false, 00:17:56.588 "dhchap_key": "key3", 00:17:56.588 
"method": "bdev_nvme_attach_controller", 00:17:56.588 "req_id": 1 00:17:56.588 } 00:17:56.588 Got JSON-RPC error response 00:17:56.588 response: 00:17:56.588 { 00:17:56.588 "code": -5, 00:17:56.588 "message": "Input/output error" 00:17:56.588 } 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.588 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.848 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.108 request: 00:17:57.108 { 00:17:57.108 "name": "nvme0", 00:17:57.108 "trtype": "tcp", 00:17:57.108 "traddr": "10.0.0.2", 00:17:57.108 "adrfam": "ipv4", 00:17:57.108 "trsvcid": "4420", 00:17:57.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:57.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.108 "prchk_reftag": false, 00:17:57.108 "prchk_guard": false, 00:17:57.108 "hdgst": false, 00:17:57.108 "ddgst": false, 00:17:57.108 "dhchap_key": "key0", 00:17:57.108 "dhchap_ctrlr_key": "key1", 00:17:57.108 "method": "bdev_nvme_attach_controller", 00:17:57.108 "req_id": 1 00:17:57.108 } 00:17:57.108 Got JSON-RPC error response 00:17:57.108 response: 00:17:57.108 { 00:17:57.108 "code": -5, 00:17:57.108 "message": "Input/output error" 00:17:57.108 } 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.108 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.368 00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
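The exchange above is the host-side half of the DH-HMAC-CHAP checks in target/auth.sh: hostrpc is a thin wrapper around scripts/rpc.py pointed at the second SPDK instance's socket (/var/tmp/host.sock), and the NOT/es=1 machinery asserts that an attach using a key the subsystem will not accept fails with JSON-RPC code -5 (Input/output error), while the follow-up attach with the matching key succeeds. A minimal standalone sketch of the same flow, with the socket path, address, and NQNs taken from this run (key0 and key3 refer to DH-CHAP keys registered earlier in the script, not shown in this excerpt):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host to a single DH group while allowing all three digests.
$rpc -s $host_sock bdev_nvme_set_options \
    --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512

# Wrong key: expected to fail with -5 (Input/output error), as logged above.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key3 \
    && echo "unexpected success" || echo "failed as expected"

# Matching key: attach, verify the controller name, then detach.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key0
$rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
$rpc -s $host_sock bdev_nvme_detach_controller nvme0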
00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.368 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2044610 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2044610 ']' 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2044610 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2044610 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2044610' 00:17:57.629 killing process with pid 2044610 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2044610 00:17:57.629 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2044610 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.889 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.889 rmmod nvme_tcp 00:17:57.889 rmmod nvme_fabrics 00:17:58.149 rmmod nvme_keyring 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2064699 ']' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2064699 ']' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2064699' 00:17:58.150 killing process with pid 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2064699 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.150 19:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Mi0 /tmp/spdk.key-sha256.6ya /tmp/spdk.key-sha384.88l /tmp/spdk.key-sha512.KZl /tmp/spdk.key-sha512.qY3 /tmp/spdk.key-sha384.JWw /tmp/spdk.key-sha256.TvB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:00.691 00:18:00.691 real 2m9.095s 00:18:00.691 user 4m57.249s 00:18:00.691 sys 0m18.532s 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 ************************************ 00:18:00.691 END TEST nvmf_auth_target 00:18:00.691 ************************************ 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:00.691 19:53:51 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.691 ************************************ 00:18:00.691 START TEST nvmf_bdevio_no_huge 00:18:00.691 ************************************ 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.691 * Looking for test storage... 00:18:00.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.691 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.692 19:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:00.692 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:04.887 19:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:04.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.887 19:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:04.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:04.887 Found net devices under 0000:86:00.0: cvl_0_0 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.887 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
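The block above is gather_supported_nvmf_pci_devs walking the PCI bus: both e810 functions (vendor 0x8086, device 0x159b) are matched, and each is then resolved to its renamed kernel netdev (cvl_0_0, cvl_0_1) through sysfs. That sysfs resolution can be reproduced by hand; a sketch using the first function from this run:

pci=0000:86:00.0
# A network PCI function lists its interfaces under .../net/; the basename
# of each entry is the netdev name (cvl_0_0 here, after the rig's renaming).
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "${dev##*/}"
done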
00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:04.888 Found net devices under 0000:86:00.1: cvl_0_1 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.888 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.148 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:05.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:18:05.148 00:18:05.148 --- 10.0.0.2 ping statistics --- 00:18:05.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.148 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:18:05.148 00:18:05.148 --- 10.0.0.1 ping statistics --- 00:18:05.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.148 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2068922 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2068922 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2068922 ']' 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
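The ping pair above closes out nvmf_tcp_init: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, so traffic between them leaves one physical port and returns on the other (the rig's two ports are assumed to be looped back-to-back) instead of short-circuiting through the host stack. Condensed from the commands logged above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator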
00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:05.148 19:53:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.409 [2024-07-24 19:53:56.788734] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:18:05.409 [2024-07-24 19:53:56.788783] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:05.409 [2024-07-24 19:53:56.852671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.409 [2024-07-24 19:53:56.939122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.409 [2024-07-24 19:53:56.939155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.409 [2024-07-24 19:53:56.939162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.409 [2024-07-24 19:53:56.939167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.409 [2024-07-24 19:53:56.939172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.409 [2024-07-24 19:53:56.939297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:05.409 [2024-07-24 19:53:56.939408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:05.409 [2024-07-24 19:53:56.939514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.409 [2024-07-24 19:53:56.939515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 [2024-07-24 19:53:57.638426] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- 
# rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 Malloc0 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.347 [2024-07-24 19:53:57.674638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:06.347 { 00:18:06.347 "params": { 00:18:06.347 "name": "Nvme$subsystem", 00:18:06.347 "trtype": "$TEST_TRANSPORT", 00:18:06.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:06.347 "adrfam": "ipv4", 00:18:06.347 "trsvcid": "$NVMF_PORT", 00:18:06.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:06.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:06.347 "hdgst": ${hdgst:-false}, 00:18:06.347 "ddgst": ${ddgst:-false} 00:18:06.347 }, 00:18:06.347 "method": "bdev_nvme_attach_controller" 00:18:06.347 } 00:18:06.347 EOF 00:18:06.347 )") 00:18:06.347 19:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:06.347 19:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:06.347 "params": { 00:18:06.347 "name": "Nvme1", 00:18:06.347 "trtype": "tcp", 00:18:06.347 "traddr": "10.0.0.2", 00:18:06.347 "adrfam": "ipv4", 00:18:06.347 "trsvcid": "4420", 00:18:06.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.348 "hdgst": false, 00:18:06.348 "ddgst": false 00:18:06.348 }, 00:18:06.348 "method": "bdev_nvme_attach_controller" 00:18:06.348 }' 00:18:06.348 [2024-07-24 19:53:57.712099] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:18:06.348 [2024-07-24 19:53:57.712148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2068994 ] 00:18:06.348 [2024-07-24 19:53:57.769201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:06.348 [2024-07-24 19:53:57.855213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.348 [2024-07-24 19:53:57.855309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.348 [2024-07-24 19:53:57.855311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.606 I/O targets: 00:18:06.606 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:06.606 00:18:06.606 00:18:06.606 CUnit - A unit testing framework for C - Version 2.1-3 00:18:06.606 http://cunit.sourceforge.net/ 00:18:06.606 00:18:06.606 00:18:06.606 Suite: bdevio tests on: Nvme1n1 00:18:06.864 Test: blockdev write read block ...passed 00:18:06.865 Test: blockdev write zeroes read block ...passed 00:18:06.865 Test: blockdev write zeroes read no split ...passed 00:18:06.865 Test: blockdev write zeroes read split ...passed 00:18:06.865 Test: blockdev write zeroes read split partial ...passed 00:18:06.865 Test: blockdev reset ...[2024-07-24 19:53:58.411677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.865 [2024-07-24 19:53:58.411742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1718300 (9): Bad file descriptor 00:18:06.865 [2024-07-24 19:53:58.431734] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:06.865 passed 00:18:06.865 Test: blockdev write read 8 blocks ...passed 00:18:06.865 Test: blockdev write read size > 128k ...passed 00:18:06.865 Test: blockdev write read invalid size ...passed 00:18:07.125 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:07.125 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:07.125 Test: blockdev write read max offset ...passed 00:18:07.125 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:07.125 Test: blockdev writev readv 8 blocks ...passed 00:18:07.125 Test: blockdev writev readv 30 x 1block ...passed 00:18:07.125 Test: blockdev writev readv block ...passed 00:18:07.125 Test: blockdev writev readv size > 128k ...passed 00:18:07.125 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:07.125 Test: blockdev comparev and writev ...[2024-07-24 19:53:58.625904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.625930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.625943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.625951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.626521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.626532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.626544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.626551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.627040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.627054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.627065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.627072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.627659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.627669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.627680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.125 [2024-07-24 19:53:58.627687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:07.125 passed 00:18:07.125 Test: blockdev nvme passthru rw ...passed 00:18:07.125 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:53:58.711901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:07.125 [2024-07-24 19:53:58.711917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.712281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:07.125 [2024-07-24 19:53:58.712292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.712661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:07.125 [2024-07-24 19:53:58.712670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:07.125 [2024-07-24 19:53:58.713070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:07.125 [2024-07-24 19:53:58.713080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:07.125 passed 00:18:07.384 Test: blockdev nvme admin passthru ...passed 00:18:07.384 Test: blockdev copy ...passed 00:18:07.384 00:18:07.384 Run Summary: Type Total Ran Passed Failed Inactive 00:18:07.384 suites 1 1 n/a 0 0 00:18:07.384 tests 23 23 23 0 0 00:18:07.384 asserts 152 152 152 0 n/a 00:18:07.384 00:18:07.384 Elapsed time = 1.222 seconds 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.673 rmmod nvme_tcp 00:18:07.673 rmmod nvme_fabrics 00:18:07.673 rmmod nvme_keyring 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2068922 ']' 00:18:07.673 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2068922 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2068922 ']' 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2068922 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2068922 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2068922' 00:18:07.674 killing process with pid 2068922 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2068922 00:18:07.674 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2068922 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.933 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:10.473 00:18:10.473 real 0m9.670s 00:18:10.473 user 0m13.621s 00:18:10.473 sys 0m4.409s 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:10.473 ************************************ 00:18:10.473 END TEST nvmf_bdevio_no_huge 00:18:10.473 ************************************ 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.473 ************************************ 00:18:10.473 START TEST nvmf_tls 00:18:10.473 ************************************ 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:10.473 * Looking for test storage... 00:18:10.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
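The PATH printed above carries each toolchain directory several times over because paths/export.sh prepends the /opt/go, /opt/protoc and /opt/golangci directories every time it is sourced. A minimal bash sketch of a de-duplication pass (dedup_path is a hypothetical helper, not part of the SPDK scripts):

dedup_path() {
    # Split $PATH on ':', keep only the first occurrence of each directory,
    # and join the survivors back together with ':'.
    local -A seen
    local out=() dir
    local IFS=':'
    for dir in $PATH; do
        if [[ -n $dir && -z ${seen[$dir]} ]]; then
            seen[$dir]=1
            out+=("$dir")
        fi
    done
    echo "${out[*]}"
}

PATH=$(dedup_path)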
00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.473 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.474 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:10.474 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:10.474 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:10.474 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:15.755 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:15.755 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:15.755 Found net devices under 0000:86:00.0: cvl_0_0 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:15.755 Found net devices under 0000:86:00.1: cvl_0_1 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.755 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.756 19:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.756 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:18:15.756 00:18:15.756 --- 10.0.0.2 ping statistics --- 00:18:15.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.756 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:18:15.756 00:18:15.756 --- 10.0.0.1 ping statistics --- 00:18:15.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.756 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2072863 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2072863 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2072863 ']' 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.756 19:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.756 [2024-07-24 19:54:07.306231] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
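The two pings above close out nvmf_tcp_init: one E810 port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator while its sibling (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target. A condensed replay of that setup, with the device and namespace names taken verbatim from the trace (run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> root ns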
00:18:15.756 [2024-07-24 19:54:07.306271] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.756 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.016 [2024-07-24 19:54:07.365846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.016 [2024-07-24 19:54:07.444277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.017 [2024-07-24 19:54:07.444314] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.017 [2024-07-24 19:54:07.444321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.017 [2024-07-24 19:54:07.444327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.017 [2024-07-24 19:54:07.444332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.017 [2024-07-24 19:54:07.444356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:16.585 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:16.844 true 00:18:16.844 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:16.844 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:17.104 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:17.104 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:17.104 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:17.104 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.104 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:17.363 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:17.363 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:17.363 19:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.621 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:17.879 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:17.879 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:17.879 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:18.138 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.138 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:18.138 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:18.138 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:18.138 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:18.397 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:18.397 19:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8jBtd4e8Od 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lMkrdMqnG4 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8jBtd4e8Od 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lMkrdMqnG4 00:18:18.656 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:18.916 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:19.175 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8jBtd4e8Od 00:18:19.175 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8jBtd4e8Od 00:18:19.175 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.175 [2024-07-24 19:54:10.758335] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.433 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.433 19:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.692 [2024-07-24 19:54:11.075157] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.692 [2024-07-24 19:54:11.075347] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.692 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.692 malloc0 00:18:19.692 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.952 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8jBtd4e8Od 00:18:20.212 [2024-07-24 19:54:11.600701] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:20.212 19:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8jBtd4e8Od 00:18:20.212 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.197 Initializing NVMe Controllers 00:18:30.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.197 Initialization complete. Launching workers. 00:18:30.197 ======================================================== 00:18:30.197 Latency(us) 00:18:30.197 Device Information : IOPS MiB/s Average min max 00:18:30.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16473.66 64.35 3885.39 882.79 5312.33 00:18:30.197 ======================================================== 00:18:30.197 Total : 16473.66 64.35 3885.39 882.79 5312.33 00:18:30.197 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jBtd4e8Od 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8jBtd4e8Od' 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2075611 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2075611 /var/tmp/bdevperf.sock 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2075611 ']' 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.197 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.197 [2024-07-24 19:54:21.770511] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:18:30.197 [2024-07-24 19:54:21.770559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075611 ] 00:18:30.197 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.457 [2024-07-24 19:54:21.821106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.457 [2024-07-24 19:54:21.899498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.026 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.026 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:31.026 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8jBtd4e8Od 00:18:31.286 [2024-07-24 19:54:22.745298] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.286 [2024-07-24 19:54:22.745363] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:31.286 TLSTESTn1 00:18:31.287 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.545 Running I/O for 10 seconds... 
00:18:41.569 00:18:41.569 Latency(us) 00:18:41.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.569 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.569 Verification LBA range: start 0x0 length 0x2000 00:18:41.569 TLSTESTn1 : 10.07 1314.43 5.13 0.00 0.00 97080.49 7123.48 136770.78 00:18:41.569 =================================================================================================================== 00:18:41.569 Total : 1314.43 5.13 0.00 0.00 97080.49 7123.48 136770.78 00:18:41.569 0 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2075611 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2075611 ']' 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2075611 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2075611 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2075611' 00:18:41.569 killing process with pid 2075611 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2075611 00:18:41.569 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.569 00:18:41.569 Latency(us) 00:18:41.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.569 =================================================================================================================== 00:18:41.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.569 [2024-07-24 19:54:33.115608] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:41.569 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2075611 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lMkrdMqnG4 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lMkrdMqnG4 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lMkrdMqnG4 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lMkrdMqnG4' 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2077560 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2077560 /var/tmp/bdevperf.sock 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2077560 ']' 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.828 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.828 [2024-07-24 19:54:33.345033] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:18:41.828 [2024-07-24 19:54:33.345088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077560 ] 00:18:41.828 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.828 [2024-07-24 19:54:33.394339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.086 [2024-07-24 19:54:33.476209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.653 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.653 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:42.653 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lMkrdMqnG4 00:18:42.912 [2024-07-24 19:54:34.327472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.912 [2024-07-24 19:54:34.327538] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.912 [2024-07-24 19:54:34.332483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:42.912 [2024-07-24 19:54:34.333097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248d570 (107): Transport endpoint is not connected 00:18:42.912 [2024-07-24 19:54:34.334088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248d570 (9): Bad file descriptor 00:18:42.912 [2024-07-24 19:54:34.335089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:42.912 [2024-07-24 19:54:34.335099] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:42.912 [2024-07-24 19:54:34.335108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:42.912 request: 00:18:42.912 { 00:18:42.912 "name": "TLSTEST", 00:18:42.912 "trtype": "tcp", 00:18:42.912 "traddr": "10.0.0.2", 00:18:42.912 "adrfam": "ipv4", 00:18:42.912 "trsvcid": "4420", 00:18:42.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.912 "prchk_reftag": false, 00:18:42.912 "prchk_guard": false, 00:18:42.912 "hdgst": false, 00:18:42.912 "ddgst": false, 00:18:42.912 "psk": "/tmp/tmp.lMkrdMqnG4", 00:18:42.912 "method": "bdev_nvme_attach_controller", 00:18:42.912 "req_id": 1 00:18:42.912 } 00:18:42.912 Got JSON-RPC error response 00:18:42.912 response: 00:18:42.912 { 00:18:42.912 "code": -5, 00:18:42.912 "message": "Input/output error" 00:18:42.912 } 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2077560 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2077560 ']' 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2077560 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2077560 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2077560' 00:18:42.912 killing process with pid 2077560 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2077560 00:18:42.912 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.912 00:18:42.912 Latency(us) 00:18:42.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.912 =================================================================================================================== 00:18:42.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.912 [2024-07-24 19:54:34.390311] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:42.912 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2077560 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jBtd4e8Od 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jBtd4e8Od 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8jBtd4e8Od 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8jBtd4e8Od' 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.170 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2077710 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2077710 /var/tmp/bdevperf.sock 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2077710 ']' 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.171 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.171 [2024-07-24 19:54:34.613984] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:18:43.171 [2024-07-24 19:54:34.614030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077710 ] 00:18:43.171 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.171 [2024-07-24 19:54:34.663493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.171 [2024-07-24 19:54:34.741111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8jBtd4e8Od 00:18:44.107 [2024-07-24 19:54:35.578980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.107 [2024-07-24 19:54:35.579048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:44.107 [2024-07-24 19:54:35.584746] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.107 [2024-07-24 19:54:35.584769] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.107 [2024-07-24 19:54:35.584793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.107 [2024-07-24 19:54:35.585582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b3570 (107): Transport endpoint is not connected 00:18:44.107 [2024-07-24 19:54:35.586577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b3570 (9): Bad file descriptor 00:18:44.107 [2024-07-24 19:54:35.587578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:44.107 [2024-07-24 19:54:35.587587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.107 [2024-07-24 19:54:35.587595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:44.107 request: 00:18:44.107 { 00:18:44.107 "name": "TLSTEST", 00:18:44.107 "trtype": "tcp", 00:18:44.107 "traddr": "10.0.0.2", 00:18:44.107 "adrfam": "ipv4", 00:18:44.107 "trsvcid": "4420", 00:18:44.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.107 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:44.107 "prchk_reftag": false, 00:18:44.107 "prchk_guard": false, 00:18:44.107 "hdgst": false, 00:18:44.107 "ddgst": false, 00:18:44.107 "psk": "/tmp/tmp.8jBtd4e8Od", 00:18:44.107 "method": "bdev_nvme_attach_controller", 00:18:44.107 "req_id": 1 00:18:44.107 } 00:18:44.107 Got JSON-RPC error response 00:18:44.107 response: 00:18:44.107 { 00:18:44.107 "code": -5, 00:18:44.107 "message": "Input/output error" 00:18:44.107 } 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2077710 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2077710 ']' 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2077710 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2077710 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2077710' 00:18:44.107 killing process with pid 2077710 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2077710 00:18:44.107 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.107 00:18:44.107 Latency(us) 00:18:44.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.107 =================================================================================================================== 00:18:44.107 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.107 [2024-07-24 19:54:35.661867] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.107 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2077710 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jBtd4e8Od 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jBtd4e8Od 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jBtd4e8Od 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8jBtd4e8Od' 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2077926 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2077926 /var/tmp/bdevperf.sock 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2077926 ']' 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.366 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.366 [2024-07-24 19:54:35.886268] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:18:44.366 [2024-07-24 19:54:35.886313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077926 ] 00:18:44.366 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.366 [2024-07-24 19:54:35.935230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.624 [2024-07-24 19:54:36.007980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.191 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.191 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:45.191 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8jBtd4e8Od 00:18:45.450 [2024-07-24 19:54:36.853909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.450 [2024-07-24 19:54:36.853976] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:45.450 [2024-07-24 19:54:36.858605] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.450 [2024-07-24 19:54:36.858630] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.450 [2024-07-24 19:54:36.858654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.450 [2024-07-24 19:54:36.859376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f73570 (107): Transport endpoint is not connected 00:18:45.450 [2024-07-24 19:54:36.860368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f73570 (9): Bad file descriptor 00:18:45.450 [2024-07-24 19:54:36.861369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:45.450 [2024-07-24 19:54:36.861378] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.450 [2024-07-24 19:54:36.861388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
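[Editor's note] The failed attach above is deliberate: tls.sh@152 runs this case under the NOT wrapper, so the xtrace lines that follow the request dump (es=1, (( !es == 0 ))) convert the RPC failure back into a passing test step. A minimal sketch of that pattern, as a simplified stand-in for the autotest_common.sh helper (the real one also screens its argument with valid_exec_arg and treats exit codes above 128 as signal deaths):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    ((es != 0))      # NOT succeeds only when the wrapped command failed
}

# Usage as traced above; run_bdevperf is defined by tls.sh itself.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8jBtd4e8Od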
00:18:45.450 request: 00:18:45.450 { 00:18:45.450 "name": "TLSTEST", 00:18:45.450 "trtype": "tcp", 00:18:45.450 "traddr": "10.0.0.2", 00:18:45.450 "adrfam": "ipv4", 00:18:45.450 "trsvcid": "4420", 00:18:45.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.450 "prchk_reftag": false, 00:18:45.450 "prchk_guard": false, 00:18:45.450 "hdgst": false, 00:18:45.450 "ddgst": false, 00:18:45.450 "psk": "/tmp/tmp.8jBtd4e8Od", 00:18:45.450 "method": "bdev_nvme_attach_controller", 00:18:45.450 "req_id": 1 00:18:45.450 } 00:18:45.450 Got JSON-RPC error response 00:18:45.450 response: 00:18:45.450 { 00:18:45.450 "code": -5, 00:18:45.450 "message": "Input/output error" 00:18:45.450 } 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2077926 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2077926 ']' 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2077926 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2077926 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2077926' 00:18:45.450 killing process with pid 2077926 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2077926 00:18:45.450 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.450 00:18:45.450 Latency(us) 00:18:45.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.450 =================================================================================================================== 00:18:45.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.450 [2024-07-24 19:54:36.921962] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:45.450 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2077926 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2078164 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2078164 /var/tmp/bdevperf.sock 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2078164 ']' 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.710 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.710 [2024-07-24 19:54:37.144327] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:18:45.710 [2024-07-24 19:54:37.144370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078164 ] 00:18:45.710 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.710 [2024-07-24 19:54:37.194254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.710 [2024-07-24 19:54:37.262062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.647 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.647 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:46.647 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:46.647 [2024-07-24 19:54:38.091198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:46.647 [2024-07-24 19:54:38.093342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d27af0 (9): Bad file descriptor 00:18:46.647 [2024-07-24 19:54:38.094341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.647 [2024-07-24 19:54:38.094351] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:46.647 [2024-07-24 19:54:38.094360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
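[Editor's note] The request/response pair dumped next is rpc.py's echo of a single JSON-RPC 2.0 exchange with the bdevperf application over its Unix domain socket. For orientation, the same call can be issued without rpc.py; this sketch assumes socat is installed, which the test itself never uses:

printf '%s' '{"jsonrpc": "2.0", "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1"}}' |
    socat - UNIX-CONNECT:/var/tmp/bdevperf.sock
# No psk parameter is passed, mirroring the psk-less tls.sh@155 case traced
# above; the TLS handshake cannot complete, so the error member comes back
# as {"code": -5, "message": "Input/output error"}, as dumped below.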
00:18:46.647 request: 00:18:46.647 { 00:18:46.647 "name": "TLSTEST", 00:18:46.647 "trtype": "tcp", 00:18:46.647 "traddr": "10.0.0.2", 00:18:46.647 "adrfam": "ipv4", 00:18:46.647 "trsvcid": "4420", 00:18:46.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.647 "prchk_reftag": false, 00:18:46.647 "prchk_guard": false, 00:18:46.647 "hdgst": false, 00:18:46.647 "ddgst": false, 00:18:46.647 "method": "bdev_nvme_attach_controller", 00:18:46.647 "req_id": 1 00:18:46.647 } 00:18:46.647 Got JSON-RPC error response 00:18:46.647 response: 00:18:46.647 { 00:18:46.647 "code": -5, 00:18:46.647 "message": "Input/output error" 00:18:46.647 } 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2078164 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2078164 ']' 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2078164 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2078164 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2078164' 00:18:46.647 killing process with pid 2078164 00:18:46.647 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2078164 00:18:46.647 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.647 00:18:46.647 Latency(us) 00:18:46.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.647 =================================================================================================================== 00:18:46.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.648 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2078164 00:18:46.907 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:46.907 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:46.907 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:46.907 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2072863 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2072863 ']' 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2072863 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2072863 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2072863' 00:18:46.908 killing process with pid 2072863 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2072863 00:18:46.908 [2024-07-24 19:54:38.374105] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:46.908 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2072863 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.uHR6okePbr 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.uHR6okePbr 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2078411 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2078411 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2078411 ']' 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.168 19:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.168 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.168 [2024-07-24 19:54:38.664050] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:18:47.168 [2024-07-24 19:54:38.664097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.168 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.168 [2024-07-24 19:54:38.721643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.428 [2024-07-24 19:54:38.790686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.428 [2024-07-24 19:54:38.790726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.428 [2024-07-24 19:54:38.790733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.428 [2024-07-24 19:54:38.790739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.428 [2024-07-24 19:54:38.790744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
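[Editor's note] The NVMeTLSkey-1:02:... string captured above (tls.sh@159) is the NVMe TLS PSK interchange format: the configured key bytes, suffixed with their little-endian CRC32 and base64-encoded, behind a two-digit hash selector (01 = SHA-256, 02 = SHA-384). A sketch of what the traced nvmf/common.sh format_key step computes, assuming python3 on PATH; note the test feeds the hex string in as literal ASCII bytes:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                              # ASCII bytes of the key string
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # 4-byte integrity tail
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
                                       base64.b64encode(key + crc).decode()))
PYEOF
}

format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==: as written to /tmp/tmp.uHR6okePbr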
00:18:47.428 [2024-07-24 19:54:38.790761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uHR6okePbr 00:18:47.997 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.256 [2024-07-24 19:54:39.638291] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.256 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.256 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:48.516 [2024-07-24 19:54:39.979189] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.516 [2024-07-24 19:54:39.979368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.516 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.776 malloc0 00:18:48.776 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.776 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:18:49.037 [2024-07-24 19:54:40.516781] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uHR6okePbr 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uHR6okePbr' 00:18:49.037 19:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2078885 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2078885 /var/tmp/bdevperf.sock 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2078885 ']' 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.037 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.037 [2024-07-24 19:54:40.577084] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:18:49.037 [2024-07-24 19:54:40.577132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078885 ] 00:18:49.037 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.037 [2024-07-24 19:54:40.627893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.297 [2024-07-24 19:54:40.700554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.868 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.868 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:49.868 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:18:50.128 [2024-07-24 19:54:41.535241] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.128 [2024-07-24 19:54:41.535316] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:50.128 TLSTESTn1 00:18:50.128 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:50.387 Running I/O for 10 seconds... 
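[Editor's note] The timed run whose results follow is driven in three steps, all present in the trace above, condensed here with the paths this workspace uses (the suite's waitforlisten polls the RPC socket between steps 1 and 2):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle; -z makes it wait to be configured over RPC.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $RPC_SOCK -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the TLS-protected controller; --psk names the 0600 key file.
$SPDK/scripts/rpc.py -s $RPC_SOCK bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.uHR6okePbr

# 3. Trigger the 10-second verify workload; -t 20 bounds how long the
#    perform_tests RPC may take overall.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $RPC_SOCK perform_tests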
00:19:00.406 00:19:00.406 Latency(us) 00:19:00.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.406 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.406 Verification LBA range: start 0x0 length 0x2000 00:19:00.406 TLSTESTn1 : 10.08 1340.80 5.24 0.00 0.00 95152.57 6724.56 145888.83 00:19:00.406 =================================================================================================================== 00:19:00.406 Total : 1340.80 5.24 0.00 0.00 95152.57 6724.56 145888.83 00:19:00.406 0 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2078885 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2078885 ']' 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2078885 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2078885 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2078885' 00:19:00.406 killing process with pid 2078885 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2078885 00:19:00.406 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.406 00:19:00.406 Latency(us) 00:19:00.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.406 =================================================================================================================== 00:19:00.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.406 [2024-07-24 19:54:51.912732] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.406 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2078885 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.uHR6okePbr 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uHR6okePbr 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uHR6okePbr 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:00.666 
19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uHR6okePbr 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uHR6okePbr' 00:19:00.666 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2080725 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2080725 /var/tmp/bdevperf.sock 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080725 ']' 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.667 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.667 [2024-07-24 19:54:52.150903] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:19:00.667 [2024-07-24 19:54:52.150951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080725 ] 00:19:00.667 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.667 [2024-07-24 19:54:52.201348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.927 [2024-07-24 19:54:52.270308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.496 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.496 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:01.496 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:19:01.756 [2024-07-24 19:54:53.116987] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.756 [2024-07-24 19:54:53.117046] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:01.756 [2024-07-24 19:54:53.117053] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.uHR6okePbr 00:19:01.756 request: 00:19:01.756 { 00:19:01.756 "name": "TLSTEST", 00:19:01.756 "trtype": "tcp", 00:19:01.756 "traddr": "10.0.0.2", 00:19:01.756 "adrfam": "ipv4", 00:19:01.756 "trsvcid": "4420", 00:19:01.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.756 "prchk_reftag": false, 00:19:01.756 "prchk_guard": false, 00:19:01.756 "hdgst": false, 00:19:01.756 "ddgst": false, 00:19:01.756 "psk": "/tmp/tmp.uHR6okePbr", 00:19:01.756 "method": "bdev_nvme_attach_controller", 00:19:01.756 "req_id": 1 00:19:01.756 } 00:19:01.756 Got JSON-RPC error response 00:19:01.756 response: 00:19:01.756 { 00:19:01.756 "code": -1, 00:19:01.756 "message": "Operation not permitted" 00:19:01.756 } 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2080725 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080725 ']' 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080725 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080725 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080725' 00:19:01.756 killing process with pid 2080725 00:19:01.756 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080725 00:19:01.756 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.756 
00:19:01.756 Latency(us) 00:19:01.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.757 =================================================================================================================== 00:19:01.757 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.757 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080725 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2078411 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2078411 ']' 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2078411 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2078411 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2078411' 00:19:02.017 killing process with pid 2078411 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2078411 00:19:02.017 [2024-07-24 19:54:53.400265] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2078411 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2080972 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2080972 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2080972 ']' 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.017 19:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.017 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.277 [2024-07-24 19:54:53.642854] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:02.277 [2024-07-24 19:54:53.642898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.277 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.277 [2024-07-24 19:54:53.700334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.277 [2024-07-24 19:54:53.767408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.277 [2024-07-24 19:54:53.767447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.277 [2024-07-24 19:54:53.767454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.277 [2024-07-24 19:54:53.767460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.277 [2024-07-24 19:54:53.767465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
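[Editor's note] Unlike the earlier -5 Input/output errors, the failure above returned -1 (Operation not permitted): bdev_nvme_load_psk refused the key file once chmod 0666 (tls.sh@170) made it group- and world-readable, before any connection was even attempted. An equivalent pre-flight check, with psk_mode_ok as this sketch's own name rather than anything in the suite:

psk_mode_ok() {
    local mode
    mode=$(stat -c '%a' "$1")        # octal permission bits, e.g. 600 or 666
    (( (8#$mode & 8#077) == 0 ))     # reject any group/other access
}

psk_mode_ok /tmp/tmp.uHR6okePbr || echo 'refusing PSK file with open permissions' >&2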
00:19:02.277 [2024-07-24 19:54:53.767486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:19:03.215 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uHR6okePbr 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.216 [2024-07-24 19:54:54.638982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.216 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.475 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.475 [2024-07-24 19:54:54.967838] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.475 [2024-07-24 19:54:54.968014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.475 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.735 malloc0 00:19:03.735 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.735 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:19:03.995 [2024-07-24 19:54:55.477339] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:03.995 [2024-07-24 19:54:55.477364] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:03.995 [2024-07-24 19:54:55.477385] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:03.995 request: 00:19:03.995 { 00:19:03.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.995 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.995 "psk": "/tmp/tmp.uHR6okePbr", 00:19:03.995 "method": "nvmf_subsystem_add_host", 00:19:03.995 "req_id": 1 00:19:03.995 } 00:19:03.995 Got JSON-RPC error response 00:19:03.995 response: 00:19:03.995 { 00:19:03.995 "code": -32603, 00:19:03.995 "message": "Internal error" 00:19:03.995 } 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2080972 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2080972 ']' 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2080972 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080972 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080972' 00:19:03.995 killing process with pid 2080972 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2080972 00:19:03.995 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2080972 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.uHR6okePbr 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2081346 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2081346 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2081346 ']' 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.256 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.256 [2024-07-24 19:54:55.790229] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:04.256 [2024-07-24 19:54:55.790274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.256 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.256 [2024-07-24 19:54:55.848783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.516 [2024-07-24 19:54:55.919744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.516 [2024-07-24 19:54:55.919787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.516 [2024-07-24 19:54:55.919794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.516 [2024-07-24 19:54:55.919799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.516 [2024-07-24 19:54:55.919804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
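[Editor's note] The -32603 Internal error above is the target-side twin of that permission guard (tcp.c: tcp_load_psk rejects the same over-permissive file during nvmf_subsystem_add_host). With the mode restored by chmod 0600 at tls.sh@181, setup_nvmf_tgt is replayed below against the fresh target; it boils down to these RPCs, using the addresses and NQNs this run configures:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.uHR6okePbr                                    # 0600 PSK file

$RPC nvmf_create_transport -t tcp -o                       # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                           # serial, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                         # -k enables TLS on the listener
$RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk "$KEY"                # admit host1 with this PSK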
00:19:04.516 [2024-07-24 19:54:55.919822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uHR6okePbr 00:19:05.087 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.347 [2024-07-24 19:54:56.763062] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.347 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.607 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.607 [2024-07-24 19:54:57.127997] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.607 [2024-07-24 19:54:57.128179] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.607 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.867 malloc0 00:19:05.867 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:19:06.127 [2024-07-24 19:54:57.645623] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2081717 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2081717 /var/tmp/bdevperf.sock 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2081717 ']' 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.127 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.127 [2024-07-24 19:54:57.708408] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:06.127 [2024-07-24 19:54:57.708451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081717 ] 00:19:06.388 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.388 [2024-07-24 19:54:57.758900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.388 [2024-07-24 19:54:57.832466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.957 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.957 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:06.957 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:19:07.217 [2024-07-24 19:54:58.675557] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.217 [2024-07-24 19:54:58.675632] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:07.217 TLSTESTn1 00:19:07.217 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:07.477 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:07.477 "subsystems": [ 00:19:07.477 { 00:19:07.477 "subsystem": "keyring", 00:19:07.477 "config": [] 00:19:07.477 }, 00:19:07.477 { 00:19:07.477 "subsystem": "iobuf", 00:19:07.477 "config": [ 00:19:07.477 { 00:19:07.477 "method": "iobuf_set_options", 00:19:07.477 "params": { 00:19:07.477 "small_pool_count": 8192, 00:19:07.477 "large_pool_count": 1024, 00:19:07.477 "small_bufsize": 8192, 00:19:07.477 "large_bufsize": 135168 00:19:07.477 } 00:19:07.477 } 00:19:07.477 ] 00:19:07.477 }, 00:19:07.477 { 00:19:07.477 "subsystem": "sock", 00:19:07.477 "config": [ 00:19:07.477 { 00:19:07.477 "method": "sock_set_default_impl", 00:19:07.477 "params": { 00:19:07.477 "impl_name": "posix" 00:19:07.477 } 00:19:07.477 }, 00:19:07.477 { 00:19:07.477 "method": "sock_impl_set_options", 00:19:07.477 "params": { 00:19:07.477 "impl_name": "ssl", 00:19:07.477 "recv_buf_size": 4096, 00:19:07.477 "send_buf_size": 4096, 
00:19:07.477 "enable_recv_pipe": true, 00:19:07.477 "enable_quickack": false, 00:19:07.477 "enable_placement_id": 0, 00:19:07.477 "enable_zerocopy_send_server": true, 00:19:07.477 "enable_zerocopy_send_client": false, 00:19:07.477 "zerocopy_threshold": 0, 00:19:07.477 "tls_version": 0, 00:19:07.477 "enable_ktls": false 00:19:07.477 } 00:19:07.477 }, 00:19:07.477 { 00:19:07.477 "method": "sock_impl_set_options", 00:19:07.477 "params": { 00:19:07.477 "impl_name": "posix", 00:19:07.477 "recv_buf_size": 2097152, 00:19:07.477 "send_buf_size": 2097152, 00:19:07.477 "enable_recv_pipe": true, 00:19:07.477 "enable_quickack": false, 00:19:07.477 "enable_placement_id": 0, 00:19:07.477 "enable_zerocopy_send_server": true, 00:19:07.478 "enable_zerocopy_send_client": false, 00:19:07.478 "zerocopy_threshold": 0, 00:19:07.478 "tls_version": 0, 00:19:07.478 "enable_ktls": false 00:19:07.478 } 00:19:07.478 } 00:19:07.478 ] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "vmd", 00:19:07.478 "config": [] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "accel", 00:19:07.478 "config": [ 00:19:07.478 { 00:19:07.478 "method": "accel_set_options", 00:19:07.478 "params": { 00:19:07.478 "small_cache_size": 128, 00:19:07.478 "large_cache_size": 16, 00:19:07.478 "task_count": 2048, 00:19:07.478 "sequence_count": 2048, 00:19:07.478 "buf_count": 2048 00:19:07.478 } 00:19:07.478 } 00:19:07.478 ] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "bdev", 00:19:07.478 "config": [ 00:19:07.478 { 00:19:07.478 "method": "bdev_set_options", 00:19:07.478 "params": { 00:19:07.478 "bdev_io_pool_size": 65535, 00:19:07.478 "bdev_io_cache_size": 256, 00:19:07.478 "bdev_auto_examine": true, 00:19:07.478 "iobuf_small_cache_size": 128, 00:19:07.478 "iobuf_large_cache_size": 16 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_raid_set_options", 00:19:07.478 "params": { 00:19:07.478 "process_window_size_kb": 1024, 00:19:07.478 "process_max_bandwidth_mb_sec": 0 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_iscsi_set_options", 00:19:07.478 "params": { 00:19:07.478 "timeout_sec": 30 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_nvme_set_options", 00:19:07.478 "params": { 00:19:07.478 "action_on_timeout": "none", 00:19:07.478 "timeout_us": 0, 00:19:07.478 "timeout_admin_us": 0, 00:19:07.478 "keep_alive_timeout_ms": 10000, 00:19:07.478 "arbitration_burst": 0, 00:19:07.478 "low_priority_weight": 0, 00:19:07.478 "medium_priority_weight": 0, 00:19:07.478 "high_priority_weight": 0, 00:19:07.478 "nvme_adminq_poll_period_us": 10000, 00:19:07.478 "nvme_ioq_poll_period_us": 0, 00:19:07.478 "io_queue_requests": 0, 00:19:07.478 "delay_cmd_submit": true, 00:19:07.478 "transport_retry_count": 4, 00:19:07.478 "bdev_retry_count": 3, 00:19:07.478 "transport_ack_timeout": 0, 00:19:07.478 "ctrlr_loss_timeout_sec": 0, 00:19:07.478 "reconnect_delay_sec": 0, 00:19:07.478 "fast_io_fail_timeout_sec": 0, 00:19:07.478 "disable_auto_failback": false, 00:19:07.478 "generate_uuids": false, 00:19:07.478 "transport_tos": 0, 00:19:07.478 "nvme_error_stat": false, 00:19:07.478 "rdma_srq_size": 0, 00:19:07.478 "io_path_stat": false, 00:19:07.478 "allow_accel_sequence": false, 00:19:07.478 "rdma_max_cq_size": 0, 00:19:07.478 "rdma_cm_event_timeout_ms": 0, 00:19:07.478 "dhchap_digests": [ 00:19:07.478 "sha256", 00:19:07.478 "sha384", 00:19:07.478 "sha512" 00:19:07.478 ], 00:19:07.478 "dhchap_dhgroups": [ 00:19:07.478 "null", 00:19:07.478 "ffdhe2048", 00:19:07.478 
"ffdhe3072", 00:19:07.478 "ffdhe4096", 00:19:07.478 "ffdhe6144", 00:19:07.478 "ffdhe8192" 00:19:07.478 ] 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_nvme_set_hotplug", 00:19:07.478 "params": { 00:19:07.478 "period_us": 100000, 00:19:07.478 "enable": false 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_malloc_create", 00:19:07.478 "params": { 00:19:07.478 "name": "malloc0", 00:19:07.478 "num_blocks": 8192, 00:19:07.478 "block_size": 4096, 00:19:07.478 "physical_block_size": 4096, 00:19:07.478 "uuid": "4772a9da-fc0c-4b97-bd7d-ea5fed7fdb8e", 00:19:07.478 "optimal_io_boundary": 0, 00:19:07.478 "md_size": 0, 00:19:07.478 "dif_type": 0, 00:19:07.478 "dif_is_head_of_md": false, 00:19:07.478 "dif_pi_format": 0 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "bdev_wait_for_examine" 00:19:07.478 } 00:19:07.478 ] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "nbd", 00:19:07.478 "config": [] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "scheduler", 00:19:07.478 "config": [ 00:19:07.478 { 00:19:07.478 "method": "framework_set_scheduler", 00:19:07.478 "params": { 00:19:07.478 "name": "static" 00:19:07.478 } 00:19:07.478 } 00:19:07.478 ] 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "subsystem": "nvmf", 00:19:07.478 "config": [ 00:19:07.478 { 00:19:07.478 "method": "nvmf_set_config", 00:19:07.478 "params": { 00:19:07.478 "discovery_filter": "match_any", 00:19:07.478 "admin_cmd_passthru": { 00:19:07.478 "identify_ctrlr": false 00:19:07.478 } 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_set_max_subsystems", 00:19:07.478 "params": { 00:19:07.478 "max_subsystems": 1024 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_set_crdt", 00:19:07.478 "params": { 00:19:07.478 "crdt1": 0, 00:19:07.478 "crdt2": 0, 00:19:07.478 "crdt3": 0 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_create_transport", 00:19:07.478 "params": { 00:19:07.478 "trtype": "TCP", 00:19:07.478 "max_queue_depth": 128, 00:19:07.478 "max_io_qpairs_per_ctrlr": 127, 00:19:07.478 "in_capsule_data_size": 4096, 00:19:07.478 "max_io_size": 131072, 00:19:07.478 "io_unit_size": 131072, 00:19:07.478 "max_aq_depth": 128, 00:19:07.478 "num_shared_buffers": 511, 00:19:07.478 "buf_cache_size": 4294967295, 00:19:07.478 "dif_insert_or_strip": false, 00:19:07.478 "zcopy": false, 00:19:07.478 "c2h_success": false, 00:19:07.478 "sock_priority": 0, 00:19:07.478 "abort_timeout_sec": 1, 00:19:07.478 "ack_timeout": 0, 00:19:07.478 "data_wr_pool_size": 0 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_create_subsystem", 00:19:07.478 "params": { 00:19:07.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.478 "allow_any_host": false, 00:19:07.478 "serial_number": "SPDK00000000000001", 00:19:07.478 "model_number": "SPDK bdev Controller", 00:19:07.478 "max_namespaces": 10, 00:19:07.478 "min_cntlid": 1, 00:19:07.478 "max_cntlid": 65519, 00:19:07.478 "ana_reporting": false 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_subsystem_add_host", 00:19:07.478 "params": { 00:19:07.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.478 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.478 "psk": "/tmp/tmp.uHR6okePbr" 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_subsystem_add_ns", 00:19:07.478 "params": { 00:19:07.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.478 "namespace": { 00:19:07.478 "nsid": 1, 00:19:07.478 
"bdev_name": "malloc0", 00:19:07.478 "nguid": "4772A9DAFC0C4B97BD7DEA5FED7FDB8E", 00:19:07.478 "uuid": "4772a9da-fc0c-4b97-bd7d-ea5fed7fdb8e", 00:19:07.478 "no_auto_visible": false 00:19:07.478 } 00:19:07.478 } 00:19:07.478 }, 00:19:07.478 { 00:19:07.478 "method": "nvmf_subsystem_add_listener", 00:19:07.478 "params": { 00:19:07.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.479 "listen_address": { 00:19:07.479 "trtype": "TCP", 00:19:07.479 "adrfam": "IPv4", 00:19:07.479 "traddr": "10.0.0.2", 00:19:07.479 "trsvcid": "4420" 00:19:07.479 }, 00:19:07.479 "secure_channel": true 00:19:07.479 } 00:19:07.479 } 00:19:07.479 ] 00:19:07.479 } 00:19:07.479 ] 00:19:07.479 }' 00:19:07.479 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:07.739 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:07.739 "subsystems": [ 00:19:07.739 { 00:19:07.739 "subsystem": "keyring", 00:19:07.739 "config": [] 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "subsystem": "iobuf", 00:19:07.739 "config": [ 00:19:07.739 { 00:19:07.739 "method": "iobuf_set_options", 00:19:07.739 "params": { 00:19:07.739 "small_pool_count": 8192, 00:19:07.739 "large_pool_count": 1024, 00:19:07.739 "small_bufsize": 8192, 00:19:07.739 "large_bufsize": 135168 00:19:07.739 } 00:19:07.739 } 00:19:07.739 ] 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "subsystem": "sock", 00:19:07.739 "config": [ 00:19:07.739 { 00:19:07.739 "method": "sock_set_default_impl", 00:19:07.739 "params": { 00:19:07.739 "impl_name": "posix" 00:19:07.739 } 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "method": "sock_impl_set_options", 00:19:07.739 "params": { 00:19:07.739 "impl_name": "ssl", 00:19:07.739 "recv_buf_size": 4096, 00:19:07.739 "send_buf_size": 4096, 00:19:07.739 "enable_recv_pipe": true, 00:19:07.739 "enable_quickack": false, 00:19:07.739 "enable_placement_id": 0, 00:19:07.739 "enable_zerocopy_send_server": true, 00:19:07.739 "enable_zerocopy_send_client": false, 00:19:07.739 "zerocopy_threshold": 0, 00:19:07.739 "tls_version": 0, 00:19:07.739 "enable_ktls": false 00:19:07.739 } 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "method": "sock_impl_set_options", 00:19:07.739 "params": { 00:19:07.739 "impl_name": "posix", 00:19:07.739 "recv_buf_size": 2097152, 00:19:07.739 "send_buf_size": 2097152, 00:19:07.739 "enable_recv_pipe": true, 00:19:07.739 "enable_quickack": false, 00:19:07.739 "enable_placement_id": 0, 00:19:07.739 "enable_zerocopy_send_server": true, 00:19:07.739 "enable_zerocopy_send_client": false, 00:19:07.739 "zerocopy_threshold": 0, 00:19:07.739 "tls_version": 0, 00:19:07.739 "enable_ktls": false 00:19:07.739 } 00:19:07.739 } 00:19:07.739 ] 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "subsystem": "vmd", 00:19:07.739 "config": [] 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "subsystem": "accel", 00:19:07.739 "config": [ 00:19:07.739 { 00:19:07.739 "method": "accel_set_options", 00:19:07.739 "params": { 00:19:07.739 "small_cache_size": 128, 00:19:07.739 "large_cache_size": 16, 00:19:07.739 "task_count": 2048, 00:19:07.739 "sequence_count": 2048, 00:19:07.739 "buf_count": 2048 00:19:07.739 } 00:19:07.739 } 00:19:07.739 ] 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "subsystem": "bdev", 00:19:07.739 "config": [ 00:19:07.739 { 00:19:07.739 "method": "bdev_set_options", 00:19:07.739 "params": { 00:19:07.739 "bdev_io_pool_size": 65535, 00:19:07.739 "bdev_io_cache_size": 256, 00:19:07.739 
"bdev_auto_examine": true, 00:19:07.739 "iobuf_small_cache_size": 128, 00:19:07.739 "iobuf_large_cache_size": 16 00:19:07.739 } 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "method": "bdev_raid_set_options", 00:19:07.739 "params": { 00:19:07.739 "process_window_size_kb": 1024, 00:19:07.739 "process_max_bandwidth_mb_sec": 0 00:19:07.739 } 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "method": "bdev_iscsi_set_options", 00:19:07.739 "params": { 00:19:07.739 "timeout_sec": 30 00:19:07.739 } 00:19:07.739 }, 00:19:07.739 { 00:19:07.739 "method": "bdev_nvme_set_options", 00:19:07.739 "params": { 00:19:07.739 "action_on_timeout": "none", 00:19:07.739 "timeout_us": 0, 00:19:07.739 "timeout_admin_us": 0, 00:19:07.740 "keep_alive_timeout_ms": 10000, 00:19:07.740 "arbitration_burst": 0, 00:19:07.740 "low_priority_weight": 0, 00:19:07.740 "medium_priority_weight": 0, 00:19:07.740 "high_priority_weight": 0, 00:19:07.740 "nvme_adminq_poll_period_us": 10000, 00:19:07.740 "nvme_ioq_poll_period_us": 0, 00:19:07.740 "io_queue_requests": 512, 00:19:07.740 "delay_cmd_submit": true, 00:19:07.740 "transport_retry_count": 4, 00:19:07.740 "bdev_retry_count": 3, 00:19:07.740 "transport_ack_timeout": 0, 00:19:07.740 "ctrlr_loss_timeout_sec": 0, 00:19:07.740 "reconnect_delay_sec": 0, 00:19:07.740 "fast_io_fail_timeout_sec": 0, 00:19:07.740 "disable_auto_failback": false, 00:19:07.740 "generate_uuids": false, 00:19:07.740 "transport_tos": 0, 00:19:07.740 "nvme_error_stat": false, 00:19:07.740 "rdma_srq_size": 0, 00:19:07.740 "io_path_stat": false, 00:19:07.740 "allow_accel_sequence": false, 00:19:07.740 "rdma_max_cq_size": 0, 00:19:07.740 "rdma_cm_event_timeout_ms": 0, 00:19:07.740 "dhchap_digests": [ 00:19:07.740 "sha256", 00:19:07.740 "sha384", 00:19:07.740 "sha512" 00:19:07.740 ], 00:19:07.740 "dhchap_dhgroups": [ 00:19:07.740 "null", 00:19:07.740 "ffdhe2048", 00:19:07.740 "ffdhe3072", 00:19:07.740 "ffdhe4096", 00:19:07.740 "ffdhe6144", 00:19:07.740 "ffdhe8192" 00:19:07.740 ] 00:19:07.740 } 00:19:07.740 }, 00:19:07.740 { 00:19:07.740 "method": "bdev_nvme_attach_controller", 00:19:07.740 "params": { 00:19:07.740 "name": "TLSTEST", 00:19:07.740 "trtype": "TCP", 00:19:07.740 "adrfam": "IPv4", 00:19:07.740 "traddr": "10.0.0.2", 00:19:07.740 "trsvcid": "4420", 00:19:07.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.740 "prchk_reftag": false, 00:19:07.740 "prchk_guard": false, 00:19:07.740 "ctrlr_loss_timeout_sec": 0, 00:19:07.740 "reconnect_delay_sec": 0, 00:19:07.740 "fast_io_fail_timeout_sec": 0, 00:19:07.740 "psk": "/tmp/tmp.uHR6okePbr", 00:19:07.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.740 "hdgst": false, 00:19:07.740 "ddgst": false 00:19:07.740 } 00:19:07.740 }, 00:19:07.740 { 00:19:07.740 "method": "bdev_nvme_set_hotplug", 00:19:07.740 "params": { 00:19:07.740 "period_us": 100000, 00:19:07.740 "enable": false 00:19:07.740 } 00:19:07.740 }, 00:19:07.740 { 00:19:07.740 "method": "bdev_wait_for_examine" 00:19:07.740 } 00:19:07.740 ] 00:19:07.740 }, 00:19:07.740 { 00:19:07.740 "subsystem": "nbd", 00:19:07.740 "config": [] 00:19:07.740 } 00:19:07.740 ] 00:19:07.740 }' 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2081717 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2081717 ']' 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2081717 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081717 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081717' 00:19:07.740 killing process with pid 2081717 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2081717 00:19:07.740 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.740 00:19:07.740 Latency(us) 00:19:07.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.740 =================================================================================================================== 00:19:07.740 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:07.740 [2024-07-24 19:54:59.311407] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:07.740 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2081717 00:19:08.000 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2081346 00:19:08.000 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2081346 ']' 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2081346 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081346 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081346' 00:19:08.001 killing process with pid 2081346 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2081346 00:19:08.001 [2024-07-24 19:54:59.538863] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:08.001 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2081346 00:19:08.261 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:08.261 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.261 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:08.261 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:08.261 "subsystems": [ 00:19:08.261 { 00:19:08.261 "subsystem": "keyring", 00:19:08.261 "config": [] 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 
"subsystem": "iobuf", 00:19:08.261 "config": [ 00:19:08.261 { 00:19:08.261 "method": "iobuf_set_options", 00:19:08.261 "params": { 00:19:08.261 "small_pool_count": 8192, 00:19:08.261 "large_pool_count": 1024, 00:19:08.261 "small_bufsize": 8192, 00:19:08.261 "large_bufsize": 135168 00:19:08.261 } 00:19:08.261 } 00:19:08.261 ] 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "subsystem": "sock", 00:19:08.261 "config": [ 00:19:08.261 { 00:19:08.261 "method": "sock_set_default_impl", 00:19:08.261 "params": { 00:19:08.261 "impl_name": "posix" 00:19:08.261 } 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "method": "sock_impl_set_options", 00:19:08.261 "params": { 00:19:08.261 "impl_name": "ssl", 00:19:08.261 "recv_buf_size": 4096, 00:19:08.261 "send_buf_size": 4096, 00:19:08.261 "enable_recv_pipe": true, 00:19:08.261 "enable_quickack": false, 00:19:08.261 "enable_placement_id": 0, 00:19:08.261 "enable_zerocopy_send_server": true, 00:19:08.261 "enable_zerocopy_send_client": false, 00:19:08.261 "zerocopy_threshold": 0, 00:19:08.261 "tls_version": 0, 00:19:08.261 "enable_ktls": false 00:19:08.261 } 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "method": "sock_impl_set_options", 00:19:08.261 "params": { 00:19:08.261 "impl_name": "posix", 00:19:08.261 "recv_buf_size": 2097152, 00:19:08.261 "send_buf_size": 2097152, 00:19:08.261 "enable_recv_pipe": true, 00:19:08.261 "enable_quickack": false, 00:19:08.261 "enable_placement_id": 0, 00:19:08.261 "enable_zerocopy_send_server": true, 00:19:08.261 "enable_zerocopy_send_client": false, 00:19:08.261 "zerocopy_threshold": 0, 00:19:08.261 "tls_version": 0, 00:19:08.261 "enable_ktls": false 00:19:08.261 } 00:19:08.261 } 00:19:08.261 ] 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "subsystem": "vmd", 00:19:08.261 "config": [] 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "subsystem": "accel", 00:19:08.261 "config": [ 00:19:08.261 { 00:19:08.261 "method": "accel_set_options", 00:19:08.261 "params": { 00:19:08.261 "small_cache_size": 128, 00:19:08.261 "large_cache_size": 16, 00:19:08.261 "task_count": 2048, 00:19:08.261 "sequence_count": 2048, 00:19:08.261 "buf_count": 2048 00:19:08.261 } 00:19:08.261 } 00:19:08.261 ] 00:19:08.261 }, 00:19:08.261 { 00:19:08.261 "subsystem": "bdev", 00:19:08.261 "config": [ 00:19:08.261 { 00:19:08.261 "method": "bdev_set_options", 00:19:08.261 "params": { 00:19:08.261 "bdev_io_pool_size": 65535, 00:19:08.261 "bdev_io_cache_size": 256, 00:19:08.261 "bdev_auto_examine": true, 00:19:08.261 "iobuf_small_cache_size": 128, 00:19:08.261 "iobuf_large_cache_size": 16 00:19:08.261 } 00:19:08.261 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_raid_set_options", 00:19:08.262 "params": { 00:19:08.262 "process_window_size_kb": 1024, 00:19:08.262 "process_max_bandwidth_mb_sec": 0 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_iscsi_set_options", 00:19:08.262 "params": { 00:19:08.262 "timeout_sec": 30 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_nvme_set_options", 00:19:08.262 "params": { 00:19:08.262 "action_on_timeout": "none", 00:19:08.262 "timeout_us": 0, 00:19:08.262 "timeout_admin_us": 0, 00:19:08.262 "keep_alive_timeout_ms": 10000, 00:19:08.262 "arbitration_burst": 0, 00:19:08.262 "low_priority_weight": 0, 00:19:08.262 "medium_priority_weight": 0, 00:19:08.262 "high_priority_weight": 0, 00:19:08.262 "nvme_adminq_poll_period_us": 10000, 00:19:08.262 "nvme_ioq_poll_period_us": 0, 00:19:08.262 "io_queue_requests": 0, 00:19:08.262 "delay_cmd_submit": true, 00:19:08.262 "transport_retry_count": 4, 
00:19:08.262 "bdev_retry_count": 3, 00:19:08.262 "transport_ack_timeout": 0, 00:19:08.262 "ctrlr_loss_timeout_sec": 0, 00:19:08.262 "reconnect_delay_sec": 0, 00:19:08.262 "fast_io_fail_timeout_sec": 0, 00:19:08.262 "disable_auto_failback": false, 00:19:08.262 "generate_uuids": false, 00:19:08.262 "transport_tos": 0, 00:19:08.262 "nvme_error_stat": false, 00:19:08.262 "rdma_srq_size": 0, 00:19:08.262 "io_path_stat": false, 00:19:08.262 "allow_accel_sequence": false, 00:19:08.262 "rdma_max_cq_size": 0, 00:19:08.262 "rdma_cm_event_timeout_ms": 0, 00:19:08.262 "dhchap_digests": [ 00:19:08.262 "sha256", 00:19:08.262 "sha384", 00:19:08.262 "sha512" 00:19:08.262 ], 00:19:08.262 "dhchap_dhgroups": [ 00:19:08.262 "null", 00:19:08.262 "ffdhe2048", 00:19:08.262 "ffdhe3072", 00:19:08.262 "ffdhe4096", 00:19:08.262 "ffdhe6144", 00:19:08.262 "ffdhe8192" 00:19:08.262 ] 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_nvme_set_hotplug", 00:19:08.262 "params": { 00:19:08.262 "period_us": 100000, 00:19:08.262 "enable": false 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_malloc_create", 00:19:08.262 "params": { 00:19:08.262 "name": "malloc0", 00:19:08.262 "num_blocks": 8192, 00:19:08.262 "block_size": 4096, 00:19:08.262 "physical_block_size": 4096, 00:19:08.262 "uuid": "4772a9da-fc0c-4b97-bd7d-ea5fed7fdb8e", 00:19:08.262 "optimal_io_boundary": 0, 00:19:08.262 "md_size": 0, 00:19:08.262 "dif_type": 0, 00:19:08.262 "dif_is_head_of_md": false, 00:19:08.262 "dif_pi_format": 0 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "bdev_wait_for_examine" 00:19:08.262 } 00:19:08.262 ] 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "subsystem": "nbd", 00:19:08.262 "config": [] 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "subsystem": "scheduler", 00:19:08.262 "config": [ 00:19:08.262 { 00:19:08.262 "method": "framework_set_scheduler", 00:19:08.262 "params": { 00:19:08.262 "name": "static" 00:19:08.262 } 00:19:08.262 } 00:19:08.262 ] 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "subsystem": "nvmf", 00:19:08.262 "config": [ 00:19:08.262 { 00:19:08.262 "method": "nvmf_set_config", 00:19:08.262 "params": { 00:19:08.262 "discovery_filter": "match_any", 00:19:08.262 "admin_cmd_passthru": { 00:19:08.262 "identify_ctrlr": false 00:19:08.262 } 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_set_max_subsystems", 00:19:08.262 "params": { 00:19:08.262 "max_subsystems": 1024 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_set_crdt", 00:19:08.262 "params": { 00:19:08.262 "crdt1": 0, 00:19:08.262 "crdt2": 0, 00:19:08.262 "crdt3": 0 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_create_transport", 00:19:08.262 "params": { 00:19:08.262 "trtype": "TCP", 00:19:08.262 "max_queue_depth": 128, 00:19:08.262 "max_io_qpairs_per_ctrlr": 127, 00:19:08.262 "in_capsule_data_size": 4096, 00:19:08.262 "max_io_size": 131072, 00:19:08.262 "io_unit_size": 131072, 00:19:08.262 "max_aq_depth": 128, 00:19:08.262 "num_shared_buffers": 511, 00:19:08.262 "buf_cache_size": 4294967295, 00:19:08.262 "dif_insert_or_strip": false, 00:19:08.262 "zcopy": false, 00:19:08.262 "c2h_success": false, 00:19:08.262 "sock_priority": 0, 00:19:08.262 "abort_timeout_sec": 1, 00:19:08.262 "ack_timeout": 0, 00:19:08.262 "data_wr_pool_size": 0 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_create_subsystem", 00:19:08.262 "params": { 00:19:08.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.262 
"allow_any_host": false, 00:19:08.262 "serial_number": "SPDK00000000000001", 00:19:08.262 "model_number": "SPDK bdev Controller", 00:19:08.262 "max_namespaces": 10, 00:19:08.262 "min_cntlid": 1, 00:19:08.262 "max_cntlid": 65519, 00:19:08.262 "ana_reporting": false 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_subsystem_add_host", 00:19:08.262 "params": { 00:19:08.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.262 "host": "nqn.2016-06.io.spdk:host1", 00:19:08.262 "psk": "/tmp/tmp.uHR6okePbr" 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_subsystem_add_ns", 00:19:08.262 "params": { 00:19:08.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.262 "namespace": { 00:19:08.262 "nsid": 1, 00:19:08.262 "bdev_name": "malloc0", 00:19:08.262 "nguid": "4772A9DAFC0C4B97BD7DEA5FED7FDB8E", 00:19:08.262 "uuid": "4772a9da-fc0c-4b97-bd7d-ea5fed7fdb8e", 00:19:08.262 "no_auto_visible": false 00:19:08.262 } 00:19:08.262 } 00:19:08.262 }, 00:19:08.262 { 00:19:08.262 "method": "nvmf_subsystem_add_listener", 00:19:08.262 "params": { 00:19:08.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.262 "listen_address": { 00:19:08.262 "trtype": "TCP", 00:19:08.262 "adrfam": "IPv4", 00:19:08.262 "traddr": "10.0.0.2", 00:19:08.262 "trsvcid": "4420" 00:19:08.262 }, 00:19:08.262 "secure_channel": true 00:19:08.262 } 00:19:08.262 } 00:19:08.262 ] 00:19:08.262 } 00:19:08.262 ] 00:19:08.262 }' 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2081995 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2081995 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2081995 ']' 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.262 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.262 [2024-07-24 19:54:59.786287] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:08.263 [2024-07-24 19:54:59.786331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.263 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.263 [2024-07-24 19:54:59.844822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.523 [2024-07-24 19:54:59.921935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:08.523 [2024-07-24 19:54:59.921971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.523 [2024-07-24 19:54:59.921977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.523 [2024-07-24 19:54:59.921984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.523 [2024-07-24 19:54:59.921989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.523 [2024-07-24 19:54:59.922035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.783 [2024-07-24 19:55:00.126876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.783 [2024-07-24 19:55:00.157228] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.783 [2024-07-24 19:55:00.173290] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.783 [2024-07-24 19:55:00.173457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2082220 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2082220 /var/tmp/bdevperf.sock 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2082220 ']' 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:09.044 "subsystems": [ 00:19:09.044 { 00:19:09.044 "subsystem": "keyring", 00:19:09.044 "config": [] 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "subsystem": "iobuf", 00:19:09.044 "config": [ 00:19:09.044 { 00:19:09.044 "method": "iobuf_set_options", 00:19:09.044 "params": { 00:19:09.044 "small_pool_count": 8192, 00:19:09.044 "large_pool_count": 1024, 00:19:09.044 "small_bufsize": 8192, 00:19:09.044 "large_bufsize": 135168 00:19:09.044 } 00:19:09.044 } 00:19:09.044 ] 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "subsystem": "sock", 00:19:09.044 "config": [ 00:19:09.044 { 00:19:09.044 "method": "sock_set_default_impl", 00:19:09.044 "params": { 00:19:09.044 "impl_name": "posix" 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "sock_impl_set_options", 00:19:09.044 "params": { 00:19:09.044 "impl_name": "ssl", 
00:19:09.044 "recv_buf_size": 4096, 00:19:09.044 "send_buf_size": 4096, 00:19:09.044 "enable_recv_pipe": true, 00:19:09.044 "enable_quickack": false, 00:19:09.044 "enable_placement_id": 0, 00:19:09.044 "enable_zerocopy_send_server": true, 00:19:09.044 "enable_zerocopy_send_client": false, 00:19:09.044 "zerocopy_threshold": 0, 00:19:09.044 "tls_version": 0, 00:19:09.044 "enable_ktls": false 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "sock_impl_set_options", 00:19:09.044 "params": { 00:19:09.044 "impl_name": "posix", 00:19:09.044 "recv_buf_size": 2097152, 00:19:09.044 "send_buf_size": 2097152, 00:19:09.044 "enable_recv_pipe": true, 00:19:09.044 "enable_quickack": false, 00:19:09.044 "enable_placement_id": 0, 00:19:09.044 "enable_zerocopy_send_server": true, 00:19:09.044 "enable_zerocopy_send_client": false, 00:19:09.044 "zerocopy_threshold": 0, 00:19:09.044 "tls_version": 0, 00:19:09.044 "enable_ktls": false 00:19:09.044 } 00:19:09.044 } 00:19:09.044 ] 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "subsystem": "vmd", 00:19:09.044 "config": [] 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "subsystem": "accel", 00:19:09.044 "config": [ 00:19:09.044 { 00:19:09.044 "method": "accel_set_options", 00:19:09.044 "params": { 00:19:09.044 "small_cache_size": 128, 00:19:09.044 "large_cache_size": 16, 00:19:09.044 "task_count": 2048, 00:19:09.044 "sequence_count": 2048, 00:19:09.044 "buf_count": 2048 00:19:09.044 } 00:19:09.044 } 00:19:09.044 ] 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "subsystem": "bdev", 00:19:09.044 "config": [ 00:19:09.044 { 00:19:09.044 "method": "bdev_set_options", 00:19:09.044 "params": { 00:19:09.044 "bdev_io_pool_size": 65535, 00:19:09.044 "bdev_io_cache_size": 256, 00:19:09.044 "bdev_auto_examine": true, 00:19:09.044 "iobuf_small_cache_size": 128, 00:19:09.044 "iobuf_large_cache_size": 16 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "bdev_raid_set_options", 00:19:09.044 "params": { 00:19:09.044 "process_window_size_kb": 1024, 00:19:09.044 "process_max_bandwidth_mb_sec": 0 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "bdev_iscsi_set_options", 00:19:09.044 "params": { 00:19:09.044 "timeout_sec": 30 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "bdev_nvme_set_options", 00:19:09.044 "params": { 00:19:09.044 "action_on_timeout": "none", 00:19:09.044 "timeout_us": 0, 00:19:09.044 "timeout_admin_us": 0, 00:19:09.044 "keep_alive_timeout_ms": 10000, 00:19:09.044 "arbitration_burst": 0, 00:19:09.044 "low_priority_weight": 0, 00:19:09.044 "medium_priority_weight": 0, 00:19:09.044 "high_priority_weight": 0, 00:19:09.044 "nvme_adminq_poll_period_us": 10000, 00:19:09.044 "nvme_ioq_poll_period_us": 0, 00:19:09.044 "io_queue_requests": 512, 00:19:09.044 "delay_cmd_submit": true, 00:19:09.044 "transport_retry_count": 4, 00:19:09.044 "bdev_retry_count": 3, 00:19:09.044 "transport_ack_timeout": 0, 00:19:09.044 "ctrlr_loss_timeout_sec": 0, 00:19:09.044 "reconnect_delay_sec": 0, 00:19:09.044 "fast_io_fail_timeout_sec": 0, 00:19:09.044 "disable_auto_failback": false, 00:19:09.044 "generate_uuids": false, 00:19:09.044 "transport_tos": 0, 00:19:09.044 "nvme_error_stat": false, 00:19:09.044 "rdma_srq_size": 0, 00:19:09.044 "io_path_stat": false, 00:19:09.044 "allow_accel_sequence": false, 00:19:09.044 "rdma_max_cq_size": 0, 00:19:09.044 "rdma_cm_event_timeout_ms": 0, 00:19:09.044 "dhchap_digests": [ 00:19:09.044 "sha256", 00:19:09.044 "sha384", 00:19:09.044 "sha512" 00:19:09.044 ], 00:19:09.044 
"dhchap_dhgroups": [ 00:19:09.044 "null", 00:19:09.044 "ffdhe2048", 00:19:09.044 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.044 "ffdhe3072", 00:19:09.044 "ffdhe4096", 00:19:09.044 "ffdhe6144", 00:19:09.044 "ffdhe8192" 00:19:09.044 ] 00:19:09.044 } 00:19:09.044 }, 00:19:09.044 { 00:19:09.044 "method": "bdev_nvme_attach_controller", 00:19:09.044 "params": { 00:19:09.044 "name": "TLSTEST", 00:19:09.044 "trtype": "TCP", 00:19:09.044 "adrfam": "IPv4", 00:19:09.044 "traddr": "10.0.0.2", 00:19:09.044 "trsvcid": "4420", 00:19:09.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.044 "prchk_reftag": false, 00:19:09.045 "prchk_guard": false, 00:19:09.045 "ctrlr_loss_timeout_sec": 0, 00:19:09.045 "reconnect_delay_sec": 0, 00:19:09.045 "fast_io_fail_timeout_sec": 0, 00:19:09.045 "psk": "/tmp/tmp.uHR6okePbr", 00:19:09.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.045 "hdgst": false, 00:19:09.045 "ddgst": false 00:19:09.045 } 00:19:09.045 }, 00:19:09.045 { 00:19:09.045 "method": "bdev_nvme_set_hotplug", 00:19:09.045 "params": { 00:19:09.045 "period_us": 100000, 00:19:09.045 "enable": false 00:19:09.045 } 00:19:09.045 }, 00:19:09.045 { 00:19:09.045 "method": "bdev_wait_for_examine" 00:19:09.045 } 00:19:09.045 ] 00:19:09.045 }, 00:19:09.045 { 00:19:09.045 "subsystem": "nbd", 00:19:09.045 "config": [] 00:19:09.045 } 00:19:09.045 ] 00:19:09.045 }' 00:19:09.045 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.045 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.045 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.305 [2024-07-24 19:55:00.668511] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:09.305 [2024-07-24 19:55:00.668556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082220 ] 00:19:09.305 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.305 [2024-07-24 19:55:00.717985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.305 [2024-07-24 19:55:00.796113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.564 [2024-07-24 19:55:00.939021] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.565 [2024-07-24 19:55:00.939102] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:10.134 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.134 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:10.134 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:10.134 Running I/O for 10 seconds... 
00:19:20.119 00:19:20.119 Latency(us) 00:19:20.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.119 Verification LBA range: start 0x0 length 0x2000 00:19:20.119 TLSTESTn1 : 10.07 1351.76 5.28 0.00 0.00 94408.93 7465.41 162301.33 00:19:20.119 =================================================================================================================== 00:19:20.119 Total : 1351.76 5.28 0.00 0.00 94408.93 7465.41 162301.33 00:19:20.119 0 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2082220 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2082220 ']' 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2082220 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2082220 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2082220' 00:19:20.119 killing process with pid 2082220 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2082220 00:19:20.119 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.119 00:19:20.119 Latency(us) 00:19:20.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.119 =================================================================================================================== 00:19:20.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.119 [2024-07-24 19:55:11.711204] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.119 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2082220 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2081995 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2081995 ']' 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2081995 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081995 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:20.380 19:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081995' 00:19:20.380 killing process with pid 2081995 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2081995 00:19:20.380 [2024-07-24 19:55:11.934422] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:20.380 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2081995 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2084061 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2084061 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2084061 ']' 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.640 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.640 [2024-07-24 19:55:12.183765] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:20.640 [2024-07-24 19:55:12.183810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.640 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.900 [2024-07-24 19:55:12.239688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.900 [2024-07-24 19:55:12.318661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.900 [2024-07-24 19:55:12.318698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.900 [2024-07-24 19:55:12.318705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.900 [2024-07-24 19:55:12.318711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.900 [2024-07-24 19:55:12.318716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.900 [2024-07-24 19:55:12.318737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.532 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.532 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:21.532 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.532 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:21.532 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.532 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.532 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.uHR6okePbr 00:19:21.532 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uHR6okePbr 00:19:21.532 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.792 [2024-07-24 19:55:13.175796] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.792 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.792 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:22.051 [2024-07-24 19:55:13.520692] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.051 [2024-07-24 19:55:13.520865] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.051 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:22.311 malloc0 00:19:22.311 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:22.311 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uHR6okePbr 00:19:22.571 [2024-07-24 19:55:14.046321] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2084522 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2084522 /var/tmp/bdevperf.sock 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 2084522 ']' 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.571 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.571 [2024-07-24 19:55:14.088963] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:22.571 [2024-07-24 19:55:14.089009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084522 ] 00:19:22.571 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.571 [2024-07-24 19:55:14.141600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.830 [2024-07-24 19:55:14.221826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.830 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.830 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:22.830 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uHR6okePbr 00:19:23.090 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.090 [2024-07-24 19:55:14.652517] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.349 nvme0n1 00:19:23.349 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.349 Running I/O for 1 seconds... 
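(The 1-second verify run whose results follow is driven entirely over bdevperf's RPC socket: bdevperf was launched with -z, so it sits idle until a controller is attached and perform_tests is issued. Condensed sketch of that handshake, with the flags copied from the trace above:)

# start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4k verify I/O, 1 second
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# attach the TLS-protected controller over that socket (keyring_file_add_key plus
# bdev_nvme_attach_controller, as shown above), then kick off the workload:
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests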
00:19:24.730 00:19:24.730 Latency(us) 00:19:24.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.730 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:24.730 Verification LBA range: start 0x0 length 0x2000 00:19:24.730 nvme0n1 : 1.09 1261.95 4.93 0.00 0.00 98365.79 7094.98 145888.83 00:19:24.730 =================================================================================================================== 00:19:24.730 Total : 1261.95 4.93 0.00 0.00 98365.79 7094.98 145888.83 00:19:24.730 0 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2084522 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2084522 ']' 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2084522 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.730 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2084522 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2084522' 00:19:24.730 killing process with pid 2084522 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2084522 00:19:24.730 Received shutdown signal, test time was about 1.000000 seconds 00:19:24.730 00:19:24.730 Latency(us) 00:19:24.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.730 =================================================================================================================== 00:19:24.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2084522 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2084061 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2084061 ']' 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2084061 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2084061 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2084061' 00:19:24.730 killing process with pid 2084061 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2084061 00:19:24.730 [2024-07-24 19:55:16.241701] app.c:1024:log_deprecation_hits: 
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:24.730 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2084061 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2084790 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2084790 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2084790 ']' 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.990 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.990 [2024-07-24 19:55:16.484887] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:24.990 [2024-07-24 19:55:16.484932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.990 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.990 [2024-07-24 19:55:16.542383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.250 [2024-07-24 19:55:16.613763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.250 [2024-07-24 19:55:16.613802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.250 [2024-07-24 19:55:16.613808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.250 [2024-07-24 19:55:16.613814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.250 [2024-07-24 19:55:16.613819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
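The run that just finished above is the client half of the TLS test: a PSK is registered on the bdevperf app as a keyring entry, a controller is attached over NVMe/TCP with TLS, and one second of verify I/O is pushed through it. A minimal sketch of that sequence, reusing the socket and PSK interchange file from the log (paths relative to the spdk tree):

    # register the PSK interchange file as keyring entry "key0"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uHR6okePbr
    # attach the remote subsystem over TCP, enabling TLS by naming the key via --psk
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # drive the workload configured on the bdevperf command line (-q 128 -o 4k -w verify -t 1)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 'PSK path' deprecation warning above is expected: the file-path form of the PSK option is being retired in favor of keyring-backed keys, which is exactly what keyring_file_add_key supplies here.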
00:19:25.250 [2024-07-24 19:55:16.613838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 [2024-07-24 19:55:17.333436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.821 malloc0 00:19:25.821 [2024-07-24 19:55:17.361559] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.821 [2024-07-24 19:55:17.371351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2085036 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2085036 /var/tmp/bdevperf.sock 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2085036 ']' 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.821 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.081 [2024-07-24 19:55:17.438860] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
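The freshly restarted target mirrors that setup from its side: a TCP transport, a malloc0 bdev as the namespace, a subsystem restricted to host1, and a TLS-capable listener on 10.0.0.2:4420. A rough sketch of the equivalent RPC sequence, reconstructed from the saved configuration dumped later in this log (the TLS flag spellings are approximations; the script itself issues these through rpc_cmd):

    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B, per the config dump
    rpc.py keyring_file_add_key key0 /tmp/tmp.uHR6okePbr
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The repeated 'TLS support is considered experimental' notices are the listener and, later, the initiator reporting that the ssl sock implementation is in use rather than plain posix.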
00:19:26.082 [2024-07-24 19:55:17.438902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085036 ] 00:19:26.082 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.082 [2024-07-24 19:55:17.491531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.082 [2024-07-24 19:55:17.565275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.660 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.660 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.660 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uHR6okePbr 00:19:26.919 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:27.179 [2024-07-24 19:55:18.573193] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.180 nvme0n1 00:19:27.180 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.439 Running I/O for 1 seconds... 00:19:28.377 00:19:28.377 Latency(us) 00:19:28.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.377 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.377 Verification LBA range: start 0x0 length 0x2000 00:19:28.377 nvme0n1 : 1.11 1141.72 4.46 0.00 0.00 107826.41 7123.48 137682.59 00:19:28.377 =================================================================================================================== 00:19:28.377 Total : 1141.72 4.46 0.00 0.00 107826.41 7123.48 137682.59 00:19:28.377 0 00:19:28.377 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:28.377 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.377 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.637 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.638 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:28.638 "subsystems": [ 00:19:28.638 { 00:19:28.638 "subsystem": "keyring", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "keyring_file_add_key", 00:19:28.638 "params": { 00:19:28.638 "name": "key0", 00:19:28.638 "path": "/tmp/tmp.uHR6okePbr" 00:19:28.638 } 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "iobuf", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "iobuf_set_options", 00:19:28.638 "params": { 00:19:28.638 "small_pool_count": 8192, 00:19:28.638 "large_pool_count": 1024, 00:19:28.638 "small_bufsize": 8192, 00:19:28.638 "large_bufsize": 135168 00:19:28.638 } 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 
"subsystem": "sock", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "sock_set_default_impl", 00:19:28.638 "params": { 00:19:28.638 "impl_name": "posix" 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "sock_impl_set_options", 00:19:28.638 "params": { 00:19:28.638 "impl_name": "ssl", 00:19:28.638 "recv_buf_size": 4096, 00:19:28.638 "send_buf_size": 4096, 00:19:28.638 "enable_recv_pipe": true, 00:19:28.638 "enable_quickack": false, 00:19:28.638 "enable_placement_id": 0, 00:19:28.638 "enable_zerocopy_send_server": true, 00:19:28.638 "enable_zerocopy_send_client": false, 00:19:28.638 "zerocopy_threshold": 0, 00:19:28.638 "tls_version": 0, 00:19:28.638 "enable_ktls": false 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "sock_impl_set_options", 00:19:28.638 "params": { 00:19:28.638 "impl_name": "posix", 00:19:28.638 "recv_buf_size": 2097152, 00:19:28.638 "send_buf_size": 2097152, 00:19:28.638 "enable_recv_pipe": true, 00:19:28.638 "enable_quickack": false, 00:19:28.638 "enable_placement_id": 0, 00:19:28.638 "enable_zerocopy_send_server": true, 00:19:28.638 "enable_zerocopy_send_client": false, 00:19:28.638 "zerocopy_threshold": 0, 00:19:28.638 "tls_version": 0, 00:19:28.638 "enable_ktls": false 00:19:28.638 } 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "vmd", 00:19:28.638 "config": [] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "accel", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "accel_set_options", 00:19:28.638 "params": { 00:19:28.638 "small_cache_size": 128, 00:19:28.638 "large_cache_size": 16, 00:19:28.638 "task_count": 2048, 00:19:28.638 "sequence_count": 2048, 00:19:28.638 "buf_count": 2048 00:19:28.638 } 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "bdev", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "bdev_set_options", 00:19:28.638 "params": { 00:19:28.638 "bdev_io_pool_size": 65535, 00:19:28.638 "bdev_io_cache_size": 256, 00:19:28.638 "bdev_auto_examine": true, 00:19:28.638 "iobuf_small_cache_size": 128, 00:19:28.638 "iobuf_large_cache_size": 16 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_raid_set_options", 00:19:28.638 "params": { 00:19:28.638 "process_window_size_kb": 1024, 00:19:28.638 "process_max_bandwidth_mb_sec": 0 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_iscsi_set_options", 00:19:28.638 "params": { 00:19:28.638 "timeout_sec": 30 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_nvme_set_options", 00:19:28.638 "params": { 00:19:28.638 "action_on_timeout": "none", 00:19:28.638 "timeout_us": 0, 00:19:28.638 "timeout_admin_us": 0, 00:19:28.638 "keep_alive_timeout_ms": 10000, 00:19:28.638 "arbitration_burst": 0, 00:19:28.638 "low_priority_weight": 0, 00:19:28.638 "medium_priority_weight": 0, 00:19:28.638 "high_priority_weight": 0, 00:19:28.638 "nvme_adminq_poll_period_us": 10000, 00:19:28.638 "nvme_ioq_poll_period_us": 0, 00:19:28.638 "io_queue_requests": 0, 00:19:28.638 "delay_cmd_submit": true, 00:19:28.638 "transport_retry_count": 4, 00:19:28.638 "bdev_retry_count": 3, 00:19:28.638 "transport_ack_timeout": 0, 00:19:28.638 "ctrlr_loss_timeout_sec": 0, 00:19:28.638 "reconnect_delay_sec": 0, 00:19:28.638 "fast_io_fail_timeout_sec": 0, 00:19:28.638 "disable_auto_failback": false, 00:19:28.638 "generate_uuids": false, 00:19:28.638 "transport_tos": 0, 00:19:28.638 "nvme_error_stat": false, 00:19:28.638 
"rdma_srq_size": 0, 00:19:28.638 "io_path_stat": false, 00:19:28.638 "allow_accel_sequence": false, 00:19:28.638 "rdma_max_cq_size": 0, 00:19:28.638 "rdma_cm_event_timeout_ms": 0, 00:19:28.638 "dhchap_digests": [ 00:19:28.638 "sha256", 00:19:28.638 "sha384", 00:19:28.638 "sha512" 00:19:28.638 ], 00:19:28.638 "dhchap_dhgroups": [ 00:19:28.638 "null", 00:19:28.638 "ffdhe2048", 00:19:28.638 "ffdhe3072", 00:19:28.638 "ffdhe4096", 00:19:28.638 "ffdhe6144", 00:19:28.638 "ffdhe8192" 00:19:28.638 ] 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_nvme_set_hotplug", 00:19:28.638 "params": { 00:19:28.638 "period_us": 100000, 00:19:28.638 "enable": false 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_malloc_create", 00:19:28.638 "params": { 00:19:28.638 "name": "malloc0", 00:19:28.638 "num_blocks": 8192, 00:19:28.638 "block_size": 4096, 00:19:28.638 "physical_block_size": 4096, 00:19:28.638 "uuid": "89c9ca43-8b20-4604-b738-147874d4a3c2", 00:19:28.638 "optimal_io_boundary": 0, 00:19:28.638 "md_size": 0, 00:19:28.638 "dif_type": 0, 00:19:28.638 "dif_is_head_of_md": false, 00:19:28.638 "dif_pi_format": 0 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "bdev_wait_for_examine" 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "nbd", 00:19:28.638 "config": [] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "scheduler", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "framework_set_scheduler", 00:19:28.638 "params": { 00:19:28.638 "name": "static" 00:19:28.638 } 00:19:28.638 } 00:19:28.638 ] 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "subsystem": "nvmf", 00:19:28.638 "config": [ 00:19:28.638 { 00:19:28.638 "method": "nvmf_set_config", 00:19:28.638 "params": { 00:19:28.638 "discovery_filter": "match_any", 00:19:28.638 "admin_cmd_passthru": { 00:19:28.638 "identify_ctrlr": false 00:19:28.638 } 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "nvmf_set_max_subsystems", 00:19:28.638 "params": { 00:19:28.638 "max_subsystems": 1024 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "nvmf_set_crdt", 00:19:28.638 "params": { 00:19:28.638 "crdt1": 0, 00:19:28.638 "crdt2": 0, 00:19:28.638 "crdt3": 0 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "nvmf_create_transport", 00:19:28.638 "params": { 00:19:28.638 "trtype": "TCP", 00:19:28.638 "max_queue_depth": 128, 00:19:28.638 "max_io_qpairs_per_ctrlr": 127, 00:19:28.638 "in_capsule_data_size": 4096, 00:19:28.638 "max_io_size": 131072, 00:19:28.638 "io_unit_size": 131072, 00:19:28.638 "max_aq_depth": 128, 00:19:28.638 "num_shared_buffers": 511, 00:19:28.638 "buf_cache_size": 4294967295, 00:19:28.638 "dif_insert_or_strip": false, 00:19:28.638 "zcopy": false, 00:19:28.638 "c2h_success": false, 00:19:28.638 "sock_priority": 0, 00:19:28.638 "abort_timeout_sec": 1, 00:19:28.638 "ack_timeout": 0, 00:19:28.638 "data_wr_pool_size": 0 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "nvmf_create_subsystem", 00:19:28.638 "params": { 00:19:28.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.638 "allow_any_host": false, 00:19:28.638 "serial_number": "00000000000000000000", 00:19:28.638 "model_number": "SPDK bdev Controller", 00:19:28.638 "max_namespaces": 32, 00:19:28.638 "min_cntlid": 1, 00:19:28.638 "max_cntlid": 65519, 00:19:28.638 "ana_reporting": false 00:19:28.638 } 00:19:28.638 }, 00:19:28.638 { 00:19:28.638 "method": "nvmf_subsystem_add_host", 00:19:28.638 
"params": { 00:19:28.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.638 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.638 "psk": "key0" 00:19:28.638 } 00:19:28.639 }, 00:19:28.639 { 00:19:28.639 "method": "nvmf_subsystem_add_ns", 00:19:28.639 "params": { 00:19:28.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.639 "namespace": { 00:19:28.639 "nsid": 1, 00:19:28.639 "bdev_name": "malloc0", 00:19:28.639 "nguid": "89C9CA438B204604B738147874D4A3C2", 00:19:28.639 "uuid": "89c9ca43-8b20-4604-b738-147874d4a3c2", 00:19:28.639 "no_auto_visible": false 00:19:28.639 } 00:19:28.639 } 00:19:28.639 }, 00:19:28.639 { 00:19:28.639 "method": "nvmf_subsystem_add_listener", 00:19:28.639 "params": { 00:19:28.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.639 "listen_address": { 00:19:28.639 "trtype": "TCP", 00:19:28.639 "adrfam": "IPv4", 00:19:28.639 "traddr": "10.0.0.2", 00:19:28.639 "trsvcid": "4420" 00:19:28.639 }, 00:19:28.639 "secure_channel": false, 00:19:28.639 "sock_impl": "ssl" 00:19:28.639 } 00:19:28.639 } 00:19:28.639 ] 00:19:28.639 } 00:19:28.639 ] 00:19:28.639 }' 00:19:28.639 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:28.899 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:28.899 "subsystems": [ 00:19:28.899 { 00:19:28.899 "subsystem": "keyring", 00:19:28.899 "config": [ 00:19:28.899 { 00:19:28.899 "method": "keyring_file_add_key", 00:19:28.899 "params": { 00:19:28.899 "name": "key0", 00:19:28.899 "path": "/tmp/tmp.uHR6okePbr" 00:19:28.899 } 00:19:28.899 } 00:19:28.899 ] 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "subsystem": "iobuf", 00:19:28.899 "config": [ 00:19:28.899 { 00:19:28.899 "method": "iobuf_set_options", 00:19:28.899 "params": { 00:19:28.899 "small_pool_count": 8192, 00:19:28.899 "large_pool_count": 1024, 00:19:28.899 "small_bufsize": 8192, 00:19:28.899 "large_bufsize": 135168 00:19:28.899 } 00:19:28.899 } 00:19:28.899 ] 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "subsystem": "sock", 00:19:28.899 "config": [ 00:19:28.899 { 00:19:28.899 "method": "sock_set_default_impl", 00:19:28.899 "params": { 00:19:28.899 "impl_name": "posix" 00:19:28.899 } 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "method": "sock_impl_set_options", 00:19:28.899 "params": { 00:19:28.899 "impl_name": "ssl", 00:19:28.899 "recv_buf_size": 4096, 00:19:28.899 "send_buf_size": 4096, 00:19:28.899 "enable_recv_pipe": true, 00:19:28.899 "enable_quickack": false, 00:19:28.899 "enable_placement_id": 0, 00:19:28.899 "enable_zerocopy_send_server": true, 00:19:28.899 "enable_zerocopy_send_client": false, 00:19:28.899 "zerocopy_threshold": 0, 00:19:28.899 "tls_version": 0, 00:19:28.899 "enable_ktls": false 00:19:28.899 } 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "method": "sock_impl_set_options", 00:19:28.899 "params": { 00:19:28.899 "impl_name": "posix", 00:19:28.899 "recv_buf_size": 2097152, 00:19:28.899 "send_buf_size": 2097152, 00:19:28.899 "enable_recv_pipe": true, 00:19:28.899 "enable_quickack": false, 00:19:28.899 "enable_placement_id": 0, 00:19:28.899 "enable_zerocopy_send_server": true, 00:19:28.899 "enable_zerocopy_send_client": false, 00:19:28.899 "zerocopy_threshold": 0, 00:19:28.899 "tls_version": 0, 00:19:28.899 "enable_ktls": false 00:19:28.899 } 00:19:28.899 } 00:19:28.899 ] 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "subsystem": "vmd", 00:19:28.899 "config": [] 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "subsystem": 
"accel", 00:19:28.899 "config": [ 00:19:28.899 { 00:19:28.899 "method": "accel_set_options", 00:19:28.899 "params": { 00:19:28.899 "small_cache_size": 128, 00:19:28.899 "large_cache_size": 16, 00:19:28.899 "task_count": 2048, 00:19:28.899 "sequence_count": 2048, 00:19:28.899 "buf_count": 2048 00:19:28.899 } 00:19:28.899 } 00:19:28.899 ] 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "subsystem": "bdev", 00:19:28.899 "config": [ 00:19:28.899 { 00:19:28.899 "method": "bdev_set_options", 00:19:28.899 "params": { 00:19:28.899 "bdev_io_pool_size": 65535, 00:19:28.899 "bdev_io_cache_size": 256, 00:19:28.899 "bdev_auto_examine": true, 00:19:28.899 "iobuf_small_cache_size": 128, 00:19:28.899 "iobuf_large_cache_size": 16 00:19:28.899 } 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "method": "bdev_raid_set_options", 00:19:28.899 "params": { 00:19:28.899 "process_window_size_kb": 1024, 00:19:28.899 "process_max_bandwidth_mb_sec": 0 00:19:28.899 } 00:19:28.899 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_iscsi_set_options", 00:19:28.900 "params": { 00:19:28.900 "timeout_sec": 30 00:19:28.900 } 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_nvme_set_options", 00:19:28.900 "params": { 00:19:28.900 "action_on_timeout": "none", 00:19:28.900 "timeout_us": 0, 00:19:28.900 "timeout_admin_us": 0, 00:19:28.900 "keep_alive_timeout_ms": 10000, 00:19:28.900 "arbitration_burst": 0, 00:19:28.900 "low_priority_weight": 0, 00:19:28.900 "medium_priority_weight": 0, 00:19:28.900 "high_priority_weight": 0, 00:19:28.900 "nvme_adminq_poll_period_us": 10000, 00:19:28.900 "nvme_ioq_poll_period_us": 0, 00:19:28.900 "io_queue_requests": 512, 00:19:28.900 "delay_cmd_submit": true, 00:19:28.900 "transport_retry_count": 4, 00:19:28.900 "bdev_retry_count": 3, 00:19:28.900 "transport_ack_timeout": 0, 00:19:28.900 "ctrlr_loss_timeout_sec": 0, 00:19:28.900 "reconnect_delay_sec": 0, 00:19:28.900 "fast_io_fail_timeout_sec": 0, 00:19:28.900 "disable_auto_failback": false, 00:19:28.900 "generate_uuids": false, 00:19:28.900 "transport_tos": 0, 00:19:28.900 "nvme_error_stat": false, 00:19:28.900 "rdma_srq_size": 0, 00:19:28.900 "io_path_stat": false, 00:19:28.900 "allow_accel_sequence": false, 00:19:28.900 "rdma_max_cq_size": 0, 00:19:28.900 "rdma_cm_event_timeout_ms": 0, 00:19:28.900 "dhchap_digests": [ 00:19:28.900 "sha256", 00:19:28.900 "sha384", 00:19:28.900 "sha512" 00:19:28.900 ], 00:19:28.900 "dhchap_dhgroups": [ 00:19:28.900 "null", 00:19:28.900 "ffdhe2048", 00:19:28.900 "ffdhe3072", 00:19:28.900 "ffdhe4096", 00:19:28.900 "ffdhe6144", 00:19:28.900 "ffdhe8192" 00:19:28.900 ] 00:19:28.900 } 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_nvme_attach_controller", 00:19:28.900 "params": { 00:19:28.900 "name": "nvme0", 00:19:28.900 "trtype": "TCP", 00:19:28.900 "adrfam": "IPv4", 00:19:28.900 "traddr": "10.0.0.2", 00:19:28.900 "trsvcid": "4420", 00:19:28.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.900 "prchk_reftag": false, 00:19:28.900 "prchk_guard": false, 00:19:28.900 "ctrlr_loss_timeout_sec": 0, 00:19:28.900 "reconnect_delay_sec": 0, 00:19:28.900 "fast_io_fail_timeout_sec": 0, 00:19:28.900 "psk": "key0", 00:19:28.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.900 "hdgst": false, 00:19:28.900 "ddgst": false 00:19:28.900 } 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_nvme_set_hotplug", 00:19:28.900 "params": { 00:19:28.900 "period_us": 100000, 00:19:28.900 "enable": false 00:19:28.900 } 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_enable_histogram", 00:19:28.900 
"params": { 00:19:28.900 "name": "nvme0n1", 00:19:28.900 "enable": true 00:19:28.900 } 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "method": "bdev_wait_for_examine" 00:19:28.900 } 00:19:28.900 ] 00:19:28.900 }, 00:19:28.900 { 00:19:28.900 "subsystem": "nbd", 00:19:28.900 "config": [] 00:19:28.900 } 00:19:28.900 ] 00:19:28.900 }' 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2085036 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085036 ']' 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085036 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085036 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085036' 00:19:28.900 killing process with pid 2085036 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085036 00:19:28.900 Received shutdown signal, test time was about 1.000000 seconds 00:19:28.900 00:19:28.900 Latency(us) 00:19:28.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.900 =================================================================================================================== 00:19:28.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085036 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2084790 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2084790 ']' 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2084790 00:19:28.900 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2084790 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2084790' 00:19:29.161 killing process with pid 2084790 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2084790 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2084790 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:29.161 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:29.161 "subsystems": [ 00:19:29.161 { 00:19:29.161 "subsystem": "keyring", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "keyring_file_add_key", 00:19:29.161 "params": { 00:19:29.161 "name": "key0", 00:19:29.161 "path": "/tmp/tmp.uHR6okePbr" 00:19:29.161 } 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "iobuf", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "iobuf_set_options", 00:19:29.161 "params": { 00:19:29.161 "small_pool_count": 8192, 00:19:29.161 "large_pool_count": 1024, 00:19:29.161 "small_bufsize": 8192, 00:19:29.161 "large_bufsize": 135168 00:19:29.161 } 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "sock", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "sock_set_default_impl", 00:19:29.161 "params": { 00:19:29.161 "impl_name": "posix" 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "sock_impl_set_options", 00:19:29.161 "params": { 00:19:29.161 "impl_name": "ssl", 00:19:29.161 "recv_buf_size": 4096, 00:19:29.161 "send_buf_size": 4096, 00:19:29.161 "enable_recv_pipe": true, 00:19:29.161 "enable_quickack": false, 00:19:29.161 "enable_placement_id": 0, 00:19:29.161 "enable_zerocopy_send_server": true, 00:19:29.161 "enable_zerocopy_send_client": false, 00:19:29.161 "zerocopy_threshold": 0, 00:19:29.161 "tls_version": 0, 00:19:29.161 "enable_ktls": false 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "sock_impl_set_options", 00:19:29.161 "params": { 00:19:29.161 "impl_name": "posix", 00:19:29.161 "recv_buf_size": 2097152, 00:19:29.161 "send_buf_size": 2097152, 00:19:29.161 "enable_recv_pipe": true, 00:19:29.161 "enable_quickack": false, 00:19:29.161 "enable_placement_id": 0, 00:19:29.161 "enable_zerocopy_send_server": true, 00:19:29.161 "enable_zerocopy_send_client": false, 00:19:29.161 "zerocopy_threshold": 0, 00:19:29.161 "tls_version": 0, 00:19:29.161 "enable_ktls": false 00:19:29.161 } 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "vmd", 00:19:29.161 "config": [] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "accel", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "accel_set_options", 00:19:29.161 "params": { 00:19:29.161 "small_cache_size": 128, 00:19:29.161 "large_cache_size": 16, 00:19:29.161 "task_count": 2048, 00:19:29.161 "sequence_count": 2048, 00:19:29.161 "buf_count": 2048 00:19:29.161 } 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "bdev", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "bdev_set_options", 00:19:29.161 "params": { 00:19:29.161 "bdev_io_pool_size": 65535, 00:19:29.161 "bdev_io_cache_size": 256, 00:19:29.161 "bdev_auto_examine": true, 00:19:29.161 "iobuf_small_cache_size": 128, 00:19:29.161 "iobuf_large_cache_size": 16 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_raid_set_options", 00:19:29.161 "params": { 00:19:29.161 "process_window_size_kb": 1024, 00:19:29.161 "process_max_bandwidth_mb_sec": 0 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_iscsi_set_options", 00:19:29.161 "params": { 00:19:29.161 "timeout_sec": 30 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_nvme_set_options", 00:19:29.161 "params": { 00:19:29.161 "action_on_timeout": "none", 
00:19:29.161 "timeout_us": 0, 00:19:29.161 "timeout_admin_us": 0, 00:19:29.161 "keep_alive_timeout_ms": 10000, 00:19:29.161 "arbitration_burst": 0, 00:19:29.161 "low_priority_weight": 0, 00:19:29.161 "medium_priority_weight": 0, 00:19:29.161 "high_priority_weight": 0, 00:19:29.161 "nvme_adminq_poll_period_us": 10000, 00:19:29.161 "nvme_ioq_poll_period_us": 0, 00:19:29.161 "io_queue_requests": 0, 00:19:29.161 "delay_cmd_submit": true, 00:19:29.161 "transport_retry_count": 4, 00:19:29.161 "bdev_retry_count": 3, 00:19:29.161 "transport_ack_timeout": 0, 00:19:29.161 "ctrlr_loss_timeout_sec": 0, 00:19:29.161 "reconnect_delay_sec": 0, 00:19:29.161 "fast_io_fail_timeout_sec": 0, 00:19:29.161 "disable_auto_failback": false, 00:19:29.161 "generate_uuids": false, 00:19:29.161 "transport_tos": 0, 00:19:29.161 "nvme_error_stat": false, 00:19:29.161 "rdma_srq_size": 0, 00:19:29.161 "io_path_stat": false, 00:19:29.161 "allow_accel_sequence": false, 00:19:29.161 "rdma_max_cq_size": 0, 00:19:29.161 "rdma_cm_event_timeout_ms": 0, 00:19:29.161 "dhchap_digests": [ 00:19:29.161 "sha256", 00:19:29.161 "sha384", 00:19:29.161 "sha512" 00:19:29.161 ], 00:19:29.161 "dhchap_dhgroups": [ 00:19:29.161 "null", 00:19:29.161 "ffdhe2048", 00:19:29.161 "ffdhe3072", 00:19:29.161 "ffdhe4096", 00:19:29.161 "ffdhe6144", 00:19:29.161 "ffdhe8192" 00:19:29.161 ] 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_nvme_set_hotplug", 00:19:29.161 "params": { 00:19:29.161 "period_us": 100000, 00:19:29.161 "enable": false 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_malloc_create", 00:19:29.161 "params": { 00:19:29.161 "name": "malloc0", 00:19:29.161 "num_blocks": 8192, 00:19:29.161 "block_size": 4096, 00:19:29.161 "physical_block_size": 4096, 00:19:29.161 "uuid": "89c9ca43-8b20-4604-b738-147874d4a3c2", 00:19:29.161 "optimal_io_boundary": 0, 00:19:29.161 "md_size": 0, 00:19:29.161 "dif_type": 0, 00:19:29.161 "dif_is_head_of_md": false, 00:19:29.161 "dif_pi_format": 0 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "bdev_wait_for_examine" 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "nbd", 00:19:29.161 "config": [] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "scheduler", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "framework_set_scheduler", 00:19:29.161 "params": { 00:19:29.161 "name": "static" 00:19:29.161 } 00:19:29.161 } 00:19:29.161 ] 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "subsystem": "nvmf", 00:19:29.161 "config": [ 00:19:29.161 { 00:19:29.161 "method": "nvmf_set_config", 00:19:29.161 "params": { 00:19:29.161 "discovery_filter": "match_any", 00:19:29.161 "admin_cmd_passthru": { 00:19:29.161 "identify_ctrlr": false 00:19:29.161 } 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "nvmf_set_max_subsystems", 00:19:29.161 "params": { 00:19:29.161 "max_subsystems": 1024 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "nvmf_set_crdt", 00:19:29.161 "params": { 00:19:29.161 "crdt1": 0, 00:19:29.161 "crdt2": 0, 00:19:29.161 "crdt3": 0 00:19:29.161 } 00:19:29.161 }, 00:19:29.161 { 00:19:29.161 "method": "nvmf_create_transport", 00:19:29.161 "params": { 00:19:29.161 "trtype": "TCP", 00:19:29.161 "max_queue_depth": 128, 00:19:29.161 "max_io_qpairs_per_ctrlr": 127, 00:19:29.161 "in_capsule_data_size": 4096, 00:19:29.161 "max_io_size": 131072, 00:19:29.161 "io_unit_size": 131072, 00:19:29.161 "max_aq_depth": 128, 00:19:29.161 "num_shared_buffers": 511, 
00:19:29.161 "buf_cache_size": 4294967295, 00:19:29.161 "dif_insert_or_strip": false, 00:19:29.161 "zcopy": false, 00:19:29.161 "c2h_success": false, 00:19:29.161 "sock_priority": 0, 00:19:29.161 "abort_timeout_sec": 1, 00:19:29.161 " 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.161 ack_timeout": 0, 00:19:29.162 "data_wr_pool_size": 0 00:19:29.162 } 00:19:29.162 }, 00:19:29.162 { 00:19:29.162 "method": "nvmf_create_subsystem", 00:19:29.162 "params": { 00:19:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.162 "allow_any_host": false, 00:19:29.162 "serial_number": "00000000000000000000", 00:19:29.162 "model_number": "SPDK bdev Controller", 00:19:29.162 "max_namespaces": 32, 00:19:29.162 "min_cntlid": 1, 00:19:29.162 "max_cntlid": 65519, 00:19:29.162 "ana_reporting": false 00:19:29.162 } 00:19:29.162 }, 00:19:29.162 { 00:19:29.162 "method": "nvmf_subsystem_add_host", 00:19:29.162 "params": { 00:19:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.162 "host": "nqn.2016-06.io.spdk:host1", 00:19:29.162 "psk": "key0" 00:19:29.162 } 00:19:29.162 }, 00:19:29.162 { 00:19:29.162 "method": "nvmf_subsystem_add_ns", 00:19:29.162 "params": { 00:19:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.162 "namespace": { 00:19:29.162 "nsid": 1, 00:19:29.162 "bdev_name": "malloc0", 00:19:29.162 "nguid": "89C9CA438B204604B738147874D4A3C2", 00:19:29.162 "uuid": "89c9ca43-8b20-4604-b738-147874d4a3c2", 00:19:29.162 "no_auto_visible": false 00:19:29.162 } 00:19:29.162 } 00:19:29.162 }, 00:19:29.162 { 00:19:29.162 "method": "nvmf_subsystem_add_listener", 00:19:29.162 "params": { 00:19:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.162 "listen_address": { 00:19:29.162 "trtype": "TCP", 00:19:29.162 "adrfam": "IPv4", 00:19:29.162 "traddr": "10.0.0.2", 00:19:29.162 "trsvcid": "4420" 00:19:29.162 }, 00:19:29.162 "secure_channel": false, 00:19:29.162 "sock_impl": "ssl" 00:19:29.162 } 00:19:29.162 } 00:19:29.162 ] 00:19:29.162 } 00:19:29.162 ] 00:19:29.162 }' 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2085526 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2085526 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2085526 ']' 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.162 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.422 [2024-07-24 19:55:20.792250] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:19:29.422 [2024-07-24 19:55:20.792297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.422 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.422 [2024-07-24 19:55:20.848495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.422 [2024-07-24 19:55:20.920725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.423 [2024-07-24 19:55:20.920764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.423 [2024-07-24 19:55:20.920772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.423 [2024-07-24 19:55:20.920777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.423 [2024-07-24 19:55:20.920782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.423 [2024-07-24 19:55:20.920831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.690 [2024-07-24 19:55:21.132903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.690 [2024-07-24 19:55:21.171084] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.690 [2024-07-24 19:55:21.171251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2085769 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2085769 /var/tmp/bdevperf.sock 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2085769 ']' 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:30.260 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:30.260 "subsystems": [ 00:19:30.260 { 00:19:30.260 "subsystem": "keyring", 00:19:30.260 "config": [ 00:19:30.260 { 00:19:30.260 "method": "keyring_file_add_key", 00:19:30.260 "params": { 00:19:30.260 "name": "key0", 00:19:30.260 "path": "/tmp/tmp.uHR6okePbr" 00:19:30.260 } 00:19:30.260 } 00:19:30.260 ] 00:19:30.260 }, 00:19:30.260 { 00:19:30.260 "subsystem": "iobuf", 00:19:30.260 "config": [ 00:19:30.260 { 00:19:30.260 "method": "iobuf_set_options", 00:19:30.260 "params": { 00:19:30.260 "small_pool_count": 8192, 00:19:30.260 "large_pool_count": 1024, 00:19:30.260 "small_bufsize": 8192, 00:19:30.261 "large_bufsize": 135168 00:19:30.261 } 00:19:30.261 } 00:19:30.261 ] 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "subsystem": "sock", 00:19:30.261 "config": [ 00:19:30.261 { 00:19:30.261 "method": "sock_set_default_impl", 00:19:30.261 "params": { 00:19:30.261 "impl_name": "posix" 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "sock_impl_set_options", 00:19:30.261 "params": { 00:19:30.261 "impl_name": "ssl", 00:19:30.261 "recv_buf_size": 4096, 00:19:30.261 "send_buf_size": 4096, 00:19:30.261 "enable_recv_pipe": true, 00:19:30.261 "enable_quickack": false, 00:19:30.261 "enable_placement_id": 0, 00:19:30.261 "enable_zerocopy_send_server": true, 00:19:30.261 "enable_zerocopy_send_client": false, 00:19:30.261 "zerocopy_threshold": 0, 00:19:30.261 "tls_version": 0, 00:19:30.261 "enable_ktls": false 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "sock_impl_set_options", 00:19:30.261 "params": { 00:19:30.261 "impl_name": "posix", 00:19:30.261 "recv_buf_size": 2097152, 00:19:30.261 "send_buf_size": 2097152, 00:19:30.261 "enable_recv_pipe": true, 00:19:30.261 "enable_quickack": false, 00:19:30.261 "enable_placement_id": 0, 00:19:30.261 "enable_zerocopy_send_server": true, 00:19:30.261 "enable_zerocopy_send_client": false, 00:19:30.261 "zerocopy_threshold": 0, 00:19:30.261 "tls_version": 0, 00:19:30.261 "enable_ktls": false 00:19:30.261 } 00:19:30.261 } 00:19:30.261 ] 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "subsystem": "vmd", 00:19:30.261 "config": [] 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "subsystem": "accel", 00:19:30.261 "config": [ 00:19:30.261 { 00:19:30.261 "method": "accel_set_options", 00:19:30.261 "params": { 00:19:30.261 "small_cache_size": 128, 00:19:30.261 "large_cache_size": 16, 00:19:30.261 "task_count": 2048, 00:19:30.261 "sequence_count": 2048, 00:19:30.261 "buf_count": 2048 00:19:30.261 } 00:19:30.261 } 00:19:30.261 ] 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "subsystem": "bdev", 00:19:30.261 "config": [ 00:19:30.261 { 00:19:30.261 "method": "bdev_set_options", 00:19:30.261 "params": { 00:19:30.261 "bdev_io_pool_size": 65535, 00:19:30.261 "bdev_io_cache_size": 256, 00:19:30.261 "bdev_auto_examine": true, 00:19:30.261 "iobuf_small_cache_size": 128, 00:19:30.261 "iobuf_large_cache_size": 16 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_raid_set_options", 00:19:30.261 "params": { 00:19:30.261 "process_window_size_kb": 1024, 00:19:30.261 "process_max_bandwidth_mb_sec": 0 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_iscsi_set_options", 00:19:30.261 "params": { 00:19:30.261 "timeout_sec": 30 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_nvme_set_options", 00:19:30.261 "params": { 00:19:30.261 "action_on_timeout": "none", 00:19:30.261 "timeout_us": 0, 
00:19:30.261 "timeout_admin_us": 0, 00:19:30.261 "keep_alive_timeout_ms": 10000, 00:19:30.261 "arbitration_burst": 0, 00:19:30.261 "low_priority_weight": 0, 00:19:30.261 "medium_priority_weight": 0, 00:19:30.261 "high_priority_weight": 0, 00:19:30.261 "nvme_adminq_poll_period_us": 10000, 00:19:30.261 "nvme_ioq_poll_period_us": 0, 00:19:30.261 "io_queue_requests": 512, 00:19:30.261 "delay_cmd_submit": true, 00:19:30.261 "transport_retry_count": 4, 00:19:30.261 "bdev_retry_count": 3, 00:19:30.261 "transport_ack_timeout": 0, 00:19:30.261 "ctrlr_loss_timeout_sec": 0, 00:19:30.261 "reconnect_delay_sec": 0, 00:19:30.261 "fast_io_fail_timeout_sec": 0, 00:19:30.261 "disable_auto_failback": false, 00:19:30.261 "generate_uuids": false, 00:19:30.261 "transport_tos": 0, 00:19:30.261 "nvme_error_stat": false, 00:19:30.261 "rdma_srq_size": 0, 00:19:30.261 "io_path_stat": false, 00:19:30.261 "allow_accel_sequence": false, 00:19:30.261 "rdma_max_cq_size": 0, 00:19:30.261 "rdma_cm_event_timeout_ms": 0, 00:19:30.261 "dhchap_digests": [ 00:19:30.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.261 "sha256", 00:19:30.261 "sha384", 00:19:30.261 "sha512" 00:19:30.261 ], 00:19:30.261 "dhchap_dhgroups": [ 00:19:30.261 "null", 00:19:30.261 "ffdhe2048", 00:19:30.261 "ffdhe3072", 00:19:30.261 "ffdhe4096", 00:19:30.261 "ffdhe6144", 00:19:30.261 "ffdhe8192" 00:19:30.261 ] 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_nvme_attach_controller", 00:19:30.261 "params": { 00:19:30.261 "name": "nvme0", 00:19:30.261 "trtype": "TCP", 00:19:30.261 "adrfam": "IPv4", 00:19:30.261 "traddr": "10.0.0.2", 00:19:30.261 "trsvcid": "4420", 00:19:30.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.261 "prchk_reftag": false, 00:19:30.261 "prchk_guard": false, 00:19:30.261 "ctrlr_loss_timeout_sec": 0, 00:19:30.261 "reconnect_delay_sec": 0, 00:19:30.261 "fast_io_fail_timeout_sec": 0, 00:19:30.261 "psk": "key0", 00:19:30.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.261 "hdgst": false, 00:19:30.261 "ddgst": false 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_nvme_set_hotplug", 00:19:30.261 "params": { 00:19:30.261 "period_us": 100000, 00:19:30.261 "enable": false 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_enable_histogram", 00:19:30.261 "params": { 00:19:30.261 "name": "nvme0n1", 00:19:30.261 "enable": true 00:19:30.261 } 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "method": "bdev_wait_for_examine" 00:19:30.261 } 00:19:30.261 ] 00:19:30.261 }, 00:19:30.261 { 00:19:30.261 "subsystem": "nbd", 00:19:30.261 "config": [] 00:19:30.261 } 00:19:30.261 ] 00:19:30.261 }' 00:19:30.261 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.261 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.261 [2024-07-24 19:55:21.689620] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:19:30.261 [2024-07-24 19:55:21.689667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085769 ] 00:19:30.261 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.261 [2024-07-24 19:55:21.743296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.261 [2024-07-24 19:55:21.816628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.521 [2024-07-24 19:55:21.968335] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.090 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.090 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.090 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:31.090 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:31.091 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.091 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.350 Running I/O for 1 seconds... 00:19:32.286 00:19:32.286 Latency(us) 00:19:32.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.286 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:32.286 Verification LBA range: start 0x0 length 0x2000 00:19:32.286 nvme0n1 : 1.08 1155.69 4.51 0.00 0.00 107720.87 5898.24 138594.39 00:19:32.286 =================================================================================================================== 00:19:32.286 Total : 1155.69 4.51 0.00 0.00 107720.87 5898.24 138594.39 00:19:32.286 0 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:32.286 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:32.286 nvmf_trace.0 00:19:32.545 19:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2085769 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085769 ']' 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085769 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085769 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085769' 00:19:32.545 killing process with pid 2085769 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085769 00:19:32.545 Received shutdown signal, test time was about 1.000000 seconds 00:19:32.545 00:19:32.545 Latency(us) 00:19:32.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.545 =================================================================================================================== 00:19:32.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.545 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085769 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.545 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.805 rmmod nvme_tcp 00:19:32.805 rmmod nvme_fabrics 00:19:32.805 rmmod nvme_keyring 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2085526 ']' 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2085526 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2085526 ']' 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2085526 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.805 19:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085526 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085526' 00:19:32.805 killing process with pid 2085526 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2085526 00:19:32.805 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2085526 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.065 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8jBtd4e8Od /tmp/tmp.lMkrdMqnG4 /tmp/tmp.uHR6okePbr 00:19:34.975 00:19:34.975 real 1m24.865s 00:19:34.975 user 2m12.964s 00:19:34.975 sys 0m26.834s 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 ************************************ 00:19:34.975 END TEST nvmf_tls 00:19:34.975 ************************************ 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.975 ************************************ 00:19:34.975 START TEST nvmf_fips 00:19:34.975 ************************************ 00:19:34.975 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:35.236 * Looking for test storage... 
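The nvmf_fips suite starting here sources the shared NVMe-oF helpers and then, in the check_openssl_version fragment visible below, gates on the system OpenSSL being at least 3.0.0, since the FIPS checks depend on the OpenSSL 3 provider model. A sketch of that style of version gate, assuming a simple sort -V comparison (the helper's full body is cut off in this excerpt):

    target=3.0.0
    version=$(openssl version | awk '{print $2}')
    if [[ $(printf '%s\n' "$target" "$version" | sort -V | head -n1) != "$target" ]]; then
        echo "openssl $version is older than $target; FIPS tests cannot run"
        exit 1
    fi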
00:19:35.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.237 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:35.238 Error setting digest 00:19:35.238 00E21A7F3A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:35.238 00E21A7F3A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.238 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.498 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.498 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.498 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.498 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.824 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.824 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:19:40.825 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.825 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.825 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.825 
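Discovery above settles on the two Intel E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver) and resolves each PCI address to its kernel netdev through sysfs, which is what produces the "Found net devices under ..." lines. A condensed sketch of that lookup, assuming the same PCI addresses seen in this run:

    # Map each supported NIC's PCI address to the netdev(s) the kernel created.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        for dev in "${pci_net_devs[@]##*/}"; do            # keep only the name
            echo "Found net devices under $pci: $dev"
        done
    done

The same helper runs again before nvmf_perf_adq below; the device list is recomputed per test rather than cached.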
19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.825 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:19:40.825 00:19:40.825 --- 10.0.0.2 ping statistics --- 00:19:40.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.825 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:19:40.825 00:19:40.825 --- 10.0.0.1 ping statistics --- 00:19:40.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.825 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2089595 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2089595 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2089595 ']' 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.825 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.825 [2024-07-24 19:55:32.226353] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
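The nvmf_tcp_init sequence above builds the test topology: the target port cvl_0_0 moves into the private namespace cvl_0_0_ns_spdk with 10.0.0.2 while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, so the two pings prove NVMe/TCP traffic will cross the physical link between the E810 ports instead of short-circuiting through loopback. Condensed from the nvmf/common.sh @244..@268 trace lines:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator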
00:19:40.826 [2024-07-24 19:55:32.226399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.826 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.826 [2024-07-24 19:55:32.283041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.826 [2024-07-24 19:55:32.359669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.826 [2024-07-24 19:55:32.359703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.826 [2024-07-24 19:55:32.359710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.826 [2024-07-24 19:55:32.359716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.826 [2024-07-24 19:55:32.359721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.826 [2024-07-24 19:55:32.359753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.766 [2024-07-24 19:55:33.207874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.766 [2024-07-24 19:55:33.223881] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.766 [2024-07-24 19:55:33.224058] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.766 
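With the listener up on 10.0.0.2:4420, the preceding fips.sh steps installed the TLS pre-shared key. The PSK is in the NVMe/TCP interchange format (NVMeTLSkey-1:01:&lt;base64&gt;:), written without a trailing newline and chmod'ed to 0600 before setup_nvmf_tgt_conf hands the path to rpc.py. The key-file handling as traced (the key is SPDK's published test PSK, not a secret; the exact RPC arguments are truncated in the trace and are not reproduced here):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"   # no trailing newline in the key file
    chmod 0600 "$key_path"         # keep the PSK private to the runner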
[2024-07-24 19:55:33.252192] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.766 malloc0 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2089819 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2089819 /var/tmp/bdevperf.sock 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2089819 ']' 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.766 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:41.766 [2024-07-24 19:55:33.330616] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:19:41.766 [2024-07-24 19:55:33.330664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089819 ] 00:19:41.766 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.024 [2024-07-24 19:55:33.380034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.024 [2024-07-24 19:55:33.453108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.594 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.594 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:42.594 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:42.854 [2024-07-24 19:55:34.266941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.854 [2024-07-24 19:55:34.267020] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.854 TLSTESTn1 00:19:42.854 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.143 Running I/O for 10 seconds... 
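The measurement phase pairs three commands, all present verbatim in the trace (paths shortened here to the spdk repo root): bdevperf starts idle with a 4 KiB, queue-depth-128 verify workload queued (-z makes it wait for an RPC), the controller is attached over TCP with TLS via --psk, and perform_tests kicks off the 10-second run whose table follows.

    # 1. bdevperf idles on its own RPC socket with the workload parameters queued:
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # 2. attach the NVMe-oF controller over TCP+TLS using key.txt from above:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    # 3. trigger the queued job and block until the result table is printed:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests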
00:19:53.122 00:19:53.122 Latency(us) 00:19:53.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.122 Verification LBA range: start 0x0 length 0x2000 00:19:53.122 TLSTESTn1 : 10.09 1320.95 5.16 0.00 0.00 96504.26 6867.03 151359.67 00:19:53.122 =================================================================================================================== 00:19:53.122 Total : 1320.95 5.16 0.00 0.00 96504.26 6867.03 151359.67 00:19:53.122 0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:53.122 nvmf_trace.0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2089819 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2089819 ']' 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2089819 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.122 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2089819 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2089819' 00:19:53.382 killing process with pid 2089819 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2089819 00:19:53.382 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.382 00:19:53.382 Latency(us) 00:19:53.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.382 =================================================================================================================== 00:19:53.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.382 
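The headline numbers in the table are internally consistent: 1320.95 IOPS of 4096-byte I/O is 1320.95 × 4096 / 2^20 ≈ 5.16 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 128 with a 96504.26 µs average completion time predicts about 128 / 0.0965 s ≈ 1326 IOPS, close to the measured rate (bdevperf is pinned to a single core here via -m 0x4, so the figure reflects that constraint rather than the link speed). A one-liner to re-derive both from the table:

    # Cross-check the TLSTESTn1 row: throughput and a Little's-law IOPS estimate.
    awk 'BEGIN {
        iops = 1320.95; iosize = 4096; qd = 128; avg_us = 96504.26
        printf "throughput: %.2f MiB/s\n", iops * iosize / (1024 * 1024)
        printf "IOPS implied by qd/latency: %.0f\n", qd / (avg_us / 1e6)
    }'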
[2024-07-24 19:55:44.730074] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2089819 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.382 rmmod nvme_tcp 00:19:53.382 rmmod nvme_fabrics 00:19:53.382 rmmod nvme_keyring 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2089595 ']' 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2089595 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2089595 ']' 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2089595 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.382 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2089595 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2089595' 00:19:53.642 killing process with pid 2089595 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2089595 00:19:53.642 [2024-07-24 19:55:45.012816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2089595 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.642 19:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.642 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:56.182 00:19:56.182 real 0m20.714s 00:19:56.182 user 0m23.358s 00:19:56.182 sys 0m8.238s 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:56.182 ************************************ 00:19:56.182 END TEST nvmf_fips 00:19:56.182 ************************************ 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.182 19:55:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.463 
19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.463 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.463 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.463 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 ************************************ 00:20:01.464 START TEST nvmf_perf_adq 00:20:01.464 ************************************ 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.464 * Looking for test storage... 
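nvmf_perf_adq now starts through the same run_test wrapper that framed nvmf_tls and nvmf_fips above: an argument-count guard, the starred START banner, timed execution, the real/user/sys report, then the starred END banner. A sketch of that wrapper, reconstructed from the banners in this log rather than from autotest_common.sh, so the real helper differs in detail (it also manages xtrace state):

    # run_test: banner-and-timing wrapper, shape inferred from the log output.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }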
00:20:01.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.464 19:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.464 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.744 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:06.745 19:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.745 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.745 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.745 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
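The [[ ice == unknown ]] / [[ ice == unbound ]] guards in this pass compare the name of the driver currently bound to the port. One common way to recover that name from sysfs, shown purely as an illustrative sketch (the exact lookup inside common.sh may differ):

  pci=0000:86:00.0
  if [[ -e /sys/bus/pci/devices/$pci/driver ]]; then
      driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")   # -> ice
  else
      driver=unbound
  fi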
00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.745 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:06.745 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:07.748 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:09.670 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
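adq_reload_driver (perf_adq.sh@53-55, traced between the two discovery passes above) is simply a clean reload of the E810 driver so ADQ-related state starts fresh, plus a grace period for the ports to come back before nvmftestinit runs:

  rmmod ice      # drop the loaded instance and its runtime configuration
  modprobe ice   # load the driver again
  sleep 5        # give the interfaces time to reappear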
00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.951 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.952 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.952 19:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.952 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.952 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
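nvmf_tcp_init, traced above, isolates the target side of the link in its own network namespace so initiator and target can share one host; collected in order, the topology setup amounts to (names and addresses exactly as in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow open TCP port 4420 on the initiator interface and verify reachability in both directions.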
00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:20:14.952 00:20:14.952 --- 10.0.0.2 ping statistics --- 00:20:14.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.952 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:20:14.952 00:20:14.952 --- 10.0.0.1 ping statistics --- 00:20:14.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.952 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.952 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2099656 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2099656 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2099656 ']' 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:14.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.953 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.953 [2024-07-24 19:56:06.147811] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:20:14.953 [2024-07-24 19:56:06.147853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.953 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.953 [2024-07-24 19:56:06.205901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.953 [2024-07-24 19:56:06.280920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.953 [2024-07-24 19:56:06.280961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.953 [2024-07-24 19:56:06.280968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.953 [2024-07-24 19:56:06.280974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.953 [2024-07-24 19:56:06.280979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.953 [2024-07-24 19:56:06.281039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.953 [2024-07-24 19:56:06.281135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.953 [2024-07-24 19:56:06.281334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.953 [2024-07-24 19:56:06.281336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.524 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.524 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:15.524 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.524 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.524 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
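rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py client talking to /var/tmp/spdk.sock; the target configuration adq_configure_nvmf_target applies next is, written out as plain client calls with the same arguments that appear in the trace below (a sketch of the sequence, not a verbatim excerpt):

  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420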
00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.524 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 [2024-07-24 19:56:07.161545] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 Malloc1 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 [2024-07-24 19:56:07.217408] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2099780
00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2
00:20:15.785 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:15.785 EAL: No free 2048 kB hugepages reported on node 1
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
00:20:17.695 "tick_rate": 2300000000,
00:20:17.695 "poll_groups": [
00:20:17.695 {
00:20:17.695 "name": "nvmf_tgt_poll_group_000",
00:20:17.695 "admin_qpairs": 1,
00:20:17.695 "io_qpairs": 1,
00:20:17.695 "current_admin_qpairs": 1,
00:20:17.695 "current_io_qpairs": 1,
00:20:17.695 "pending_bdev_io": 0,
00:20:17.695 "completed_nvme_io": 20230,
00:20:17.695 "transports": [
00:20:17.695 {
00:20:17.695 "trtype": "TCP"
00:20:17.695 }
00:20:17.695 ]
00:20:17.695 },
00:20:17.695 {
00:20:17.695 "name": "nvmf_tgt_poll_group_001",
00:20:17.695 "admin_qpairs": 0,
00:20:17.695 "io_qpairs": 1,
00:20:17.695 "current_admin_qpairs": 0,
00:20:17.695 "current_io_qpairs": 1,
00:20:17.695 "pending_bdev_io": 0,
00:20:17.695 "completed_nvme_io": 20932,
00:20:17.695 "transports": [
00:20:17.695 {
00:20:17.695 "trtype": "TCP"
00:20:17.695 }
00:20:17.695 ]
00:20:17.695 },
00:20:17.695 {
00:20:17.695 "name": "nvmf_tgt_poll_group_002",
00:20:17.695 "admin_qpairs": 0,
00:20:17.695 "io_qpairs": 1,
00:20:17.695 "current_admin_qpairs": 0,
00:20:17.695 "current_io_qpairs": 1,
00:20:17.695 "pending_bdev_io": 0,
00:20:17.695 "completed_nvme_io": 19401,
00:20:17.695 "transports": [
00:20:17.695 {
00:20:17.695 "trtype": "TCP"
00:20:17.695 }
00:20:17.695 ]
00:20:17.695 },
00:20:17.695 {
00:20:17.695 "name": "nvmf_tgt_poll_group_003",
00:20:17.695 "admin_qpairs": 0,
00:20:17.695 "io_qpairs": 1,
00:20:17.695 "current_admin_qpairs": 0,
00:20:17.695 "current_io_qpairs": 1,
00:20:17.695 "pending_bdev_io": 0,
00:20:17.695 "completed_nvme_io": 18677,
00:20:17.695 "transports": [
00:20:17.695 {
00:20:17.695 "trtype": "TCP"
00:20:17.695 }
00:20:17.695 ]
00:20:17.695 }
00:20:17.695 ]
00:20:17.695 }'
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4
00:20:17.695 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]]
00:20:17.956 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2099780
00:20:26.081 Initializing NVMe Controllers
00:20:26.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:26.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:26.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:26.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:26.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:26.082 Initialization complete. Launching workers.
00:20:26.082 ========================================================
00:20:26.082                                                                 Latency(us)
00:20:26.082 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:20:26.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   10108.33      39.49    6351.24    1433.90   47232.77
00:20:26.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   11038.43      43.12    5798.62    1608.03   12144.27
00:20:26.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   10798.23      42.18    5927.11    1604.57   13205.04
00:20:26.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:    9950.43      38.87    6431.88    1575.29   13417.71
00:20:26.082 ========================================================
00:20:26.082 Total                                                         :   41895.42     163.65    6115.47    1433.90   47232.77
00:20:26.082
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:26.082 rmmod nvme_tcp
00:20:26.082 rmmod nvme_fabrics
00:20:26.082 rmmod nvme_keyring
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2099656 ']'
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2099656
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2099656 ']'
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2099656
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099656
00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:26.082 19:56:17
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099656' 00:20:26.082 killing process with pid 2099656 00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2099656 00:20:26.082 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2099656 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.342 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.251 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.251 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:28.251 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:29.632 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:31.014 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:36.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:36.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:36.299 Found net devices under 0000:86:00.0: cvl_0_0 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:36.299 Found net devices under 0000:86:00.1: cvl_0_1 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.299 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:20:36.300 00:20:36.300 --- 10.0.0.2 ping statistics --- 00:20:36.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.300 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:20:36.300 00:20:36.300 --- 10.0.0.1 ping statistics --- 00:20:36.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.300 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:36.300 net.core.busy_poll = 1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:36.300 net.core.busy_read = 1 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:36.300 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:36.560 
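In condensed form, the device discovery and nvmf_tcp_init sequences traced above come down to the short script below (PCI addresses, interface names and IPs as enumerated in this run; root privileges assumed). The two pings at the end are the sanity gate: sub-millisecond RTTs across the back-to-back link before any NVMe traffic flows.

    # net devices behind the two E810 ports (0x8086:0x159b)
    pci_devs=("0000:86:00.0" "0000:86:00.1")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs netdev entries
        pci_net_devs=("${pci_net_devs[@]##*/}")            # basenames: cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done

    # point-to-point topology: the target port lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator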
19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2103559 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2103559 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2103559 ']' 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.560 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.560 [2024-07-24 19:56:28.098332] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:20:36.560 [2024-07-24 19:56:28.098375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.560 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.560 [2024-07-24 19:56:28.155634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.821 [2024-07-24 19:56:28.236232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.821 [2024-07-24 19:56:28.236270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.821 [2024-07-24 19:56:28.236277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.821 [2024-07-24 19:56:28.236284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.821 [2024-07-24 19:56:28.236289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
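The adq_configure_driver phase just traced is equally short when flattened out: the mqprio qdisc splits the port into two hardware traffic classes, and the flower filter pins the NVMe/TCP listener's traffic to the second one.

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 \
        channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1   # enable kernel busy polling on sockets
    sysctl -w net.core.busy_read=1
    # TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2, offloaded (hw 1)
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer 10.0.0.2:4420 (the listener added later) to TC1, in hardware only
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # XPS/RX-queue affinity helper shipped with SPDK
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0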
00:20:36.821 [2024-07-24 19:56:28.236334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.821 [2024-07-24 19:56:28.236351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.821 [2024-07-24 19:56:28.236441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.821 [2024-07-24 19:56:28.236442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.391 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:37.651 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:37.651 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 [2024-07-24 19:56:29.102024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 Malloc1 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.651 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.652 [2024-07-24 19:56:29.149764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.652 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.652 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2103809 00:20:37.652 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:37.652 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:37.652 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:40.190 "tick_rate": 2300000000, 00:20:40.190 "poll_groups": [ 00:20:40.190 { 00:20:40.190 "name": "nvmf_tgt_poll_group_000", 00:20:40.190 "admin_qpairs": 1, 00:20:40.190 "io_qpairs": 2, 00:20:40.190 "current_admin_qpairs": 1, 00:20:40.190 
"current_io_qpairs": 2, 00:20:40.190 "pending_bdev_io": 0, 00:20:40.190 "completed_nvme_io": 27325, 00:20:40.190 "transports": [ 00:20:40.190 { 00:20:40.190 "trtype": "TCP" 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 }, 00:20:40.190 { 00:20:40.190 "name": "nvmf_tgt_poll_group_001", 00:20:40.190 "admin_qpairs": 0, 00:20:40.190 "io_qpairs": 2, 00:20:40.190 "current_admin_qpairs": 0, 00:20:40.190 "current_io_qpairs": 2, 00:20:40.190 "pending_bdev_io": 0, 00:20:40.190 "completed_nvme_io": 28342, 00:20:40.190 "transports": [ 00:20:40.190 { 00:20:40.190 "trtype": "TCP" 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 }, 00:20:40.190 { 00:20:40.190 "name": "nvmf_tgt_poll_group_002", 00:20:40.190 "admin_qpairs": 0, 00:20:40.190 "io_qpairs": 0, 00:20:40.190 "current_admin_qpairs": 0, 00:20:40.190 "current_io_qpairs": 0, 00:20:40.190 "pending_bdev_io": 0, 00:20:40.190 "completed_nvme_io": 0, 00:20:40.190 "transports": [ 00:20:40.190 { 00:20:40.190 "trtype": "TCP" 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 }, 00:20:40.190 { 00:20:40.190 "name": "nvmf_tgt_poll_group_003", 00:20:40.190 "admin_qpairs": 0, 00:20:40.190 "io_qpairs": 0, 00:20:40.190 "current_admin_qpairs": 0, 00:20:40.190 "current_io_qpairs": 0, 00:20:40.190 "pending_bdev_io": 0, 00:20:40.190 "completed_nvme_io": 0, 00:20:40.190 "transports": [ 00:20:40.190 { 00:20:40.190 "trtype": "TCP" 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 } 00:20:40.190 ] 00:20:40.190 }' 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:40.190 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2103809 00:20:48.321 Initializing NVMe Controllers 00:20:48.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:48.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:48.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:48.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:48.321 Initialization complete. Launching workers. 
00:20:48.321 ======================================================== 00:20:48.321 Latency(us) 00:20:48.321 Device Information : IOPS MiB/s Average min max 00:20:48.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6948.92 27.14 9212.22 1612.52 55462.64 00:20:48.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7620.90 29.77 8398.87 1799.83 53225.73 00:20:48.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8149.79 31.84 7855.26 1665.22 54513.05 00:20:48.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6691.23 26.14 9566.87 1693.61 54245.75 00:20:48.321 ======================================================== 00:20:48.321 Total : 29410.84 114.89 8706.14 1612.52 55462.64 00:20:48.321 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.321 rmmod nvme_tcp 00:20:48.321 rmmod nvme_fabrics 00:20:48.321 rmmod nvme_keyring 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2103559 ']' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2103559 ']' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103559' 00:20:48.321 killing process with pid 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2103559 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.321 
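Two notes on the numbers above. The throughput column is plain arithmetic on the IOPS total: 29410.84 IOPS of 4096-byte reads is 29410.84 * 4096 / 2^20, i.e. the 114.89 MiB/s in the Total row. The ADQ gate itself ran earlier, while the workers were still busy: with every port-4420 connection steered to TC1, only two of the four poll groups should carry I/O qpairs, and the check at target/perf_adq.sh@100-101 boils down to the following (the failure branch is assumed; this passing run never takes it):

    # one output line per poll group with no active I/O qpairs
    count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering failed"    # assumed handling; not reached above
        exit 1
    fi

Teardown is just as mechanical; stripped of its sudo and non-Linux handling, the killprocess helper traced above is essentially:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                  # fail fast if the target already died
        echo "killing process with pid $pid"
        kill "$pid"                     # SIGTERM lets nvmf_tgt shut down cleanly
        wait "$pid"                     # reap it; a non-zero exit fails the test
    }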
19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.321 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.230 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.230 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:50.230 00:20:50.230 real 0m49.193s 00:20:50.230 user 2m49.192s 00:20:50.230 sys 0m9.775s 00:20:50.230 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.230 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.230 ************************************ 00:20:50.230 END TEST nvmf_perf_adq 00:20:50.230 ************************************ 00:20:50.231 19:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.231 19:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:50.231 19:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.231 19:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.231 ************************************ 00:20:50.231 START TEST nvmf_shutdown 00:20:50.231 ************************************ 00:20:50.231 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.492 * Looking for test storage... 
00:20:50.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.492 19:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:50.492 19:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.492 ************************************ 00:20:50.492 START TEST nvmf_shutdown_tc1 00:20:50.492 ************************************ 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.492 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.493 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.778 19:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.778 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.778 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.778 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.779 19:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:20:55.779 00:20:55.779 --- 10.0.0.2 ping statistics --- 00:20:55.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.779 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:20:55.779 00:20:55.779 --- 10.0.0.1 ping statistics --- 00:20:55.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.779 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2108946 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2108946 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2108946 ']' 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.779 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.779 [2024-07-24 19:56:47.370154] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:20:55.779 [2024-07-24 19:56:47.370203] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.039 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.039 [2024-07-24 19:56:47.428297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.039 [2024-07-24 19:56:47.509433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.039 [2024-07-24 19:56:47.509468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.039 [2024-07-24 19:56:47.509475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.039 [2024-07-24 19:56:47.509484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.039 [2024-07-24 19:56:47.509489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
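As in the perf_adq run, nvmfappstart backgrounds the target inside the namespace and then blocks on the RPC socket; the core mask 0x1E (binary 11110) selects cores 1 through 4, exactly the reactor notices that follow. Roughly:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock, up to max_retries=100

Once the target is up, create_subsystems (traced below) presumably batches one stanza per subsystem into rpcs.txt and replays them through a single rpc_cmd, and the bdev_svc app is handed its controller list by gen_nvmf_target_json over /dev/fd/63 via --json. Each heredoc stanza below expands into one bdev_nvme_attach_controller entry, with hdgst/ddgst defaulting to false; substituting the values in effect for this run (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420), subsystem 1 becomes:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }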
00:20:56.039 [2024-07-24 19:56:47.509584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.039 [2024-07-24 19:56:47.509688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.039 [2024-07-24 19:56:47.509793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.039 [2024-07-24 19:56:47.509795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.607 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.607 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:56.607 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.607 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:56.607 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.866 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.866 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.867 [2024-07-24 19:56:48.226498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.867 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.867 Malloc1 00:20:56.867 [2024-07-24 19:56:48.322146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.867 Malloc2 00:20:56.867 Malloc3 00:20:56.867 Malloc4 00:20:57.126 Malloc5 00:20:57.126 Malloc6 00:20:57.126 Malloc7 00:20:57.126 Malloc8 00:20:57.126 Malloc9 00:20:57.126 Malloc10 00:20:57.126 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.126 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:57.126 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.126 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2109229 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2109229 /var/tmp/bdevperf.sock 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2109229 ']' 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.387 19:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": "Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.387 "ddgst": ${ddgst:-false} 00:20:57.387 }, 00:20:57.387 "method": "bdev_nvme_attach_controller" 00:20:57.387 } 00:20:57.387 EOF 00:20:57.387 )") 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": "Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.387 "ddgst": ${ddgst:-false} 00:20:57.387 }, 00:20:57.387 "method": "bdev_nvme_attach_controller" 00:20:57.387 } 00:20:57.387 EOF 00:20:57.387 )") 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": 
"Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.387 "ddgst": ${ddgst:-false} 00:20:57.387 }, 00:20:57.387 "method": "bdev_nvme_attach_controller" 00:20:57.387 } 00:20:57.387 EOF 00:20:57.387 )") 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": "Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.387 "ddgst": ${ddgst:-false} 00:20:57.387 }, 00:20:57.387 "method": "bdev_nvme_attach_controller" 00:20:57.387 } 00:20:57.387 EOF 00:20:57.387 )") 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": "Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.387 "ddgst": ${ddgst:-false} 00:20:57.387 }, 00:20:57.387 "method": "bdev_nvme_attach_controller" 00:20:57.387 } 00:20:57.387 EOF 00:20:57.387 )") 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.387 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.387 { 00:20:57.387 "params": { 00:20:57.387 "name": "Nvme$subsystem", 00:20:57.387 "trtype": "$TEST_TRANSPORT", 00:20:57.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.387 "adrfam": "ipv4", 00:20:57.387 "trsvcid": "$NVMF_PORT", 00:20:57.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.387 "hdgst": ${hdgst:-false}, 00:20:57.388 "ddgst": ${ddgst:-false} 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 } 00:20:57.388 EOF 00:20:57.388 )") 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.388 { 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme$subsystem", 00:20:57.388 "trtype": "$TEST_TRANSPORT", 00:20:57.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "$NVMF_PORT", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.388 "hdgst": ${hdgst:-false}, 00:20:57.388 "ddgst": ${ddgst:-false} 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 } 00:20:57.388 EOF 00:20:57.388 )") 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.388 [2024-07-24 19:56:48.793708] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:20:57.388 [2024-07-24 19:56:48.793757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.388 { 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme$subsystem", 00:20:57.388 "trtype": "$TEST_TRANSPORT", 00:20:57.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "$NVMF_PORT", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.388 "hdgst": ${hdgst:-false}, 00:20:57.388 "ddgst": ${ddgst:-false} 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 } 00:20:57.388 EOF 00:20:57.388 )") 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.388 { 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme$subsystem", 00:20:57.388 "trtype": "$TEST_TRANSPORT", 00:20:57.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "$NVMF_PORT", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.388 "hdgst": ${hdgst:-false}, 00:20:57.388 "ddgst": ${ddgst:-false} 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 } 00:20:57.388 EOF 00:20:57.388 )") 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.388 { 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme$subsystem", 00:20:57.388 "trtype": "$TEST_TRANSPORT", 00:20:57.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 
"trsvcid": "$NVMF_PORT", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.388 "hdgst": ${hdgst:-false}, 00:20:57.388 "ddgst": ${ddgst:-false} 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 } 00:20:57.388 EOF 00:20:57.388 )") 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:57.388 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:57.388 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme1", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme2", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme3", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme4", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme5", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme6", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme7", 00:20:57.388 "trtype": "tcp", 
00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme8", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme9", 00:20:57.388 "trtype": "tcp", 00:20:57.388 "traddr": "10.0.0.2", 00:20:57.388 "adrfam": "ipv4", 00:20:57.388 "trsvcid": "4420", 00:20:57.388 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:57.388 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:57.388 "hdgst": false, 00:20:57.388 "ddgst": false 00:20:57.388 }, 00:20:57.388 "method": "bdev_nvme_attach_controller" 00:20:57.388 },{ 00:20:57.388 "params": { 00:20:57.388 "name": "Nvme10", 00:20:57.389 "trtype": "tcp", 00:20:57.389 "traddr": "10.0.0.2", 00:20:57.389 "adrfam": "ipv4", 00:20:57.389 "trsvcid": "4420", 00:20:57.389 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:57.389 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:57.389 "hdgst": false, 00:20:57.389 "ddgst": false 00:20:57.389 }, 00:20:57.389 "method": "bdev_nvme_attach_controller" 00:20:57.389 }' 00:20:57.389 [2024-07-24 19:56:48.851142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.389 [2024-07-24 19:56:48.925183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2109229 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:58.768 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:59.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2109229 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2108946 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.709 { 00:20:59.709 "params": { 00:20:59.709 "name": "Nvme$subsystem", 00:20:59.709 "trtype": "$TEST_TRANSPORT", 00:20:59.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.709 "adrfam": "ipv4", 00:20:59.709 "trsvcid": "$NVMF_PORT", 00:20:59.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.709 "hdgst": ${hdgst:-false}, 00:20:59.709 "ddgst": ${ddgst:-false} 00:20:59.709 }, 00:20:59.709 "method": "bdev_nvme_attach_controller" 00:20:59.709 } 00:20:59.709 EOF 00:20:59.709 )") 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.709 { 00:20:59.709 "params": { 00:20:59.709 "name": "Nvme$subsystem", 00:20:59.709 "trtype": "$TEST_TRANSPORT", 00:20:59.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.709 "adrfam": "ipv4", 00:20:59.709 "trsvcid": "$NVMF_PORT", 00:20:59.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.709 "hdgst": ${hdgst:-false}, 00:20:59.709 "ddgst": ${ddgst:-false} 00:20:59.709 }, 00:20:59.709 "method": "bdev_nvme_attach_controller" 00:20:59.709 } 00:20:59.709 EOF 00:20:59.709 )") 00:20:59.709 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 [2024-07-24 19:56:51.333784] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:21:00.005 [2024-07-24 19:56:51.333833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109574 ] 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 "adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:00.005 { 00:21:00.005 "params": { 00:21:00.005 "name": "Nvme$subsystem", 00:21:00.005 "trtype": "$TEST_TRANSPORT", 00:21:00.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.005 
"adrfam": "ipv4", 00:21:00.005 "trsvcid": "$NVMF_PORT", 00:21:00.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.005 "hdgst": ${hdgst:-false}, 00:21:00.005 "ddgst": ${ddgst:-false} 00:21:00.005 }, 00:21:00.005 "method": "bdev_nvme_attach_controller" 00:21:00.005 } 00:21:00.005 EOF 00:21:00.005 )") 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:00.005 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.005 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:00.006 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:00.006 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme1", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme2", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme3", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme4", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme5", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme6", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme7", 
00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme8", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme9", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 },{ 00:21:00.006 "params": { 00:21:00.006 "name": "Nvme10", 00:21:00.006 "trtype": "tcp", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "adrfam": "ipv4", 00:21:00.006 "trsvcid": "4420", 00:21:00.006 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:00.006 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:00.006 "hdgst": false, 00:21:00.006 "ddgst": false 00:21:00.006 }, 00:21:00.006 "method": "bdev_nvme_attach_controller" 00:21:00.006 }' 00:21:00.006 [2024-07-24 19:56:51.391253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.006 [2024-07-24 19:56:51.465764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.391 Running I/O for 1 seconds... 
00:21:02.770
00:21:02.770                                                                          Latency(us)
00:21:02.770 Device Information                                                       : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:21:02.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme1n1             :       1.11     289.53      18.10      0.00      0.00  218836.28   21427.42  206067.98
00:21:02.771 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme2n1             :       1.16     165.82      10.36      0.00      0.00  377130.00   37384.01  341015.15
00:21:02.771 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme3n1             :       1.12     286.11      17.88      0.00      0.00  215228.77   20743.57  208803.39
00:21:02.771 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme4n1             :       1.07     298.52      18.66      0.00      0.00  202811.30   22111.28  209715.20
00:21:02.771 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme5n1             :       1.19     271.82      16.99      0.00      0.00  213244.58    9175.04  225215.89
00:21:02.771 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme6n1             :       1.18     217.86      13.62      0.00      0.00  271542.98   23706.94  306366.55
00:21:02.771 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme7n1             :       1.17     328.22      20.51      0.00      0.00  177385.59   17552.25  202420.76
00:21:02.771 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme8n1             :       1.17     218.33      13.65      0.00      0.00  262827.63   22681.15  286306.84
00:21:02.771 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme9n1             :       1.16     165.35      10.33      0.00      0.00  341572.56   22453.20  373840.14
00:21:02.771 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:02.771     Verification LBA range: start 0x0 length 0x400
00:21:02.771     Nvme10n1            :       1.20     321.09      20.07      0.00      0.00  173933.15   13962.02  203332.56
00:21:02.771 ===================================================================================================================
00:21:02.771 Total                                                                    :            2562.65     160.17      0.00      0.00  231542.69    9175.04  373840.14
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:02.771 19:56:54
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.771 rmmod nvme_tcp 00:21:02.771 rmmod nvme_fabrics 00:21:02.771 rmmod nvme_keyring 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2108946 ']' 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2108946 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2108946 ']' 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2108946 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2108946 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2108946' 00:21:02.771 killing process with pid 2108946 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2108946 00:21:02.771 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2108946 00:21:03.341 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:03.341 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
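The teardown just traced (nvmftestfini) reduces to three moves: flush state and unload the kernel initiator modules (the rmmod lines above are `modprobe -v -r` output), kill and reap the nvmf_tgt reactor process, and remove the test namespace. A trimmed-down sketch under those assumptions; the function names match the trace, but the bodies omit the uname and sudo special-casing visible above:

nvmfcleanup() {
    sync
    set +e
    # Unloading can race with in-flight disconnects, hence the trace's
    # "for i in {1..20}" retry loop.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

killprocess() {
    local pid=$1
    kill -0 "$pid"   # still alive?
    kill "$pid"
    wait "$pid"      # reap it so the next test case starts from a clean slate
}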
00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:03.342 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:05.252
00:21:05.252 real    0m14.789s
00:21:05.252 user    0m33.805s
00:21:05.252 sys     0m5.373s
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:21:05.252 ************************************
00:21:05.252 END TEST nvmf_shutdown_tc1
00:21:05.252 ************************************
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:05.252 ************************************
00:21:05.252 START TEST nvmf_shutdown_tc2
00:21:05.252 ************************************
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:05.252 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable
00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:05.253 19:56:56
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.253 19:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.253 19:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:05.253 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.253 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.253 19:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.253 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.254 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.254 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.254 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.254 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.514 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.514 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.514 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.514 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:21:05.514 00:21:05.514 --- 10.0.0.2 ping statistics --- 00:21:05.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.514 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:21:05.514 00:21:05.514 --- 10.0.0.1 ping statistics --- 00:21:05.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.514 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.514 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2110685 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2110685 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2110685 ']' 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
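Annotation: the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) reduces to the shell sketch below. Interface names, addresses, and the namespace name are taken from this log; the body is a condensed illustration, not the verbatim script.

    # Two back-to-back E810 ports: the target port is moved into its own
    # network namespace so target (10.0.0.2) and initiator (10.0.0.1) get
    # a real TCP path on a single host.
    TARGET_IF=cvl_0_0          # port under 0000:86:00.0, hosts the NVMe-oF target
    INITIATOR_IF=cvl_0_1       # port under 0000:86:00.1, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # isolate the target port
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target, as above
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> initiator

This is why the nvmf_tgt launch just above is wrapped in ip netns exec cvl_0_0_ns_spdk: the target must bind 10.0.0.2 inside the namespace, while the initiator-side tools connect from the root namespace over cvl_0_1.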
00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.775 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.775 [2024-07-24 19:56:57.178175] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:05.775 [2024-07-24 19:56:57.178222] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.775 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.775 [2024-07-24 19:56:57.235932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.775 [2024-07-24 19:56:57.310581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.775 [2024-07-24 19:56:57.310622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.775 [2024-07-24 19:56:57.310628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.775 [2024-07-24 19:56:57.310634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.775 [2024-07-24 19:56:57.310639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.775 [2024-07-24 19:56:57.310755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.775 [2024-07-24 19:56:57.310852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.775 [2024-07-24 19:56:57.310960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.775 [2024-07-24 19:56:57.310961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.715 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.715 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:06.715 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.715 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.715 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.715 [2024-07-24 19:56:58.026307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.715 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.715 Malloc1 00:21:06.715 [2024-07-24 19:56:58.117942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.715 Malloc2 00:21:06.715 Malloc3 00:21:06.715 Malloc4 00:21:06.715 Malloc5 00:21:06.715 Malloc6 00:21:06.976 Malloc7 00:21:06.976 Malloc8 00:21:06.976 Malloc9 00:21:06.976 Malloc10 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2110988 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2110988 /var/tmp/bdevperf.sock 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2110988 ']' 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
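Annotation: each "cat" at shutdown.sh@28 above appends the RPC block for one subsystem to rpcs.txt, and the single rpc_cmd at @35 then replays the whole file against the target, which is what produces the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener seen here. A plausible reconstruction of the loop follows; only the bdev names, subsystem NQNs, and listener address are confirmed by this log, while the malloc size/block size and exact flags are illustrative assumptions.

    # Hypothetical expansion of the create_subsystems loop, not the verbatim
    # shutdown.sh body: one malloc bdev + one subsystem + one listener per i.
    for i in {1..10}; do
      {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
    done
    # rpc_cmd < rpcs.txt   # batch-execute the file against /var/tmp/spdk.sock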
00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.976 { 00:21:06.976 "params": { 00:21:06.976 "name": "Nvme$subsystem", 00:21:06.976 "trtype": "$TEST_TRANSPORT", 00:21:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.976 "adrfam": "ipv4", 00:21:06.976 "trsvcid": "$NVMF_PORT", 00:21:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.976 "hdgst": ${hdgst:-false}, 00:21:06.976 "ddgst": ${ddgst:-false} 00:21:06.976 }, 00:21:06.976 "method": "bdev_nvme_attach_controller" 00:21:06.976 } 00:21:06.976 EOF 00:21:06.976 )") 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.976 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.976 { 00:21:06.976 "params": { 00:21:06.976 "name": "Nvme$subsystem", 00:21:06.976 "trtype": "$TEST_TRANSPORT", 00:21:06.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.976 "adrfam": "ipv4", 00:21:06.976 "trsvcid": "$NVMF_PORT", 00:21:06.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.977 "hdgst": ${hdgst:-false}, 00:21:06.977 "ddgst": ${ddgst:-false} 00:21:06.977 }, 00:21:06.977 "method": "bdev_nvme_attach_controller" 00:21:06.977 } 00:21:06.977 EOF 00:21:06.977 )") 00:21:06.977 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.977 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.977 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.977 { 00:21:06.977 "params": { 00:21:06.977 "name": "Nvme$subsystem", 00:21:06.977 "trtype": "$TEST_TRANSPORT", 00:21:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.977 "adrfam": "ipv4", 00:21:06.977 "trsvcid": "$NVMF_PORT", 00:21:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.977 "hdgst": ${hdgst:-false}, 00:21:06.977 "ddgst": ${ddgst:-false} 00:21:06.977 }, 00:21:06.977 "method": "bdev_nvme_attach_controller" 00:21:06.977 } 00:21:06.977 EOF 00:21:06.977 )") 00:21:06.977 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:07.237 { 00:21:07.237 "params": { 00:21:07.237 "name": "Nvme$subsystem", 00:21:07.237 "trtype": "$TEST_TRANSPORT", 00:21:07.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.237 "adrfam": "ipv4", 00:21:07.237 "trsvcid": "$NVMF_PORT", 00:21:07.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.237 "hdgst": ${hdgst:-false}, 00:21:07.237 "ddgst": ${ddgst:-false} 00:21:07.237 }, 00:21:07.237 "method": "bdev_nvme_attach_controller" 00:21:07.237 } 00:21:07.237 EOF 00:21:07.237 )") 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.237 { 00:21:07.237 "params": { 00:21:07.237 "name": "Nvme$subsystem", 00:21:07.237 "trtype": "$TEST_TRANSPORT", 00:21:07.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.237 "adrfam": "ipv4", 00:21:07.237 "trsvcid": "$NVMF_PORT", 00:21:07.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.237 "hdgst": ${hdgst:-false}, 00:21:07.237 "ddgst": ${ddgst:-false} 00:21:07.237 }, 00:21:07.237 "method": "bdev_nvme_attach_controller" 00:21:07.237 } 00:21:07.237 EOF 00:21:07.237 )") 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.237 { 00:21:07.237 "params": { 00:21:07.237 "name": "Nvme$subsystem", 00:21:07.237 "trtype": "$TEST_TRANSPORT", 00:21:07.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.237 "adrfam": "ipv4", 00:21:07.237 "trsvcid": "$NVMF_PORT", 00:21:07.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.237 "hdgst": ${hdgst:-false}, 00:21:07.237 "ddgst": ${ddgst:-false} 00:21:07.237 }, 00:21:07.237 "method": "bdev_nvme_attach_controller" 00:21:07.237 } 00:21:07.237 EOF 00:21:07.237 )") 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.237 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.238 { 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme$subsystem", 00:21:07.238 "trtype": "$TEST_TRANSPORT", 00:21:07.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "$NVMF_PORT", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.238 "hdgst": ${hdgst:-false}, 00:21:07.238 "ddgst": ${ddgst:-false} 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 } 00:21:07.238 EOF 00:21:07.238 )") 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.238 [2024-07-24 19:56:58.597134] Starting SPDK 
v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:07.238 [2024-07-24 19:56:58.597182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110988 ] 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.238 { 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme$subsystem", 00:21:07.238 "trtype": "$TEST_TRANSPORT", 00:21:07.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "$NVMF_PORT", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.238 "hdgst": ${hdgst:-false}, 00:21:07.238 "ddgst": ${ddgst:-false} 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 } 00:21:07.238 EOF 00:21:07.238 )") 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.238 { 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme$subsystem", 00:21:07.238 "trtype": "$TEST_TRANSPORT", 00:21:07.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "$NVMF_PORT", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.238 "hdgst": ${hdgst:-false}, 00:21:07.238 "ddgst": ${ddgst:-false} 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 } 00:21:07.238 EOF 00:21:07.238 )") 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:07.238 { 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme$subsystem", 00:21:07.238 "trtype": "$TEST_TRANSPORT", 00:21:07.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "$NVMF_PORT", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.238 "hdgst": ${hdgst:-false}, 00:21:07.238 "ddgst": ${ddgst:-false} 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 } 00:21:07.238 EOF 00:21:07.238 )") 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
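Annotation: gen_nvmf_target_json stitches the ten heredoc fragments above into a single bdev-subsystem configuration and pipes it through jq; bdevperf reads the result on /dev/fd/63. Expanded, the document plausibly has the shape below. The params block is verbatim from this log (first controller only; Nvme2 through Nvme10 repeat the pattern, exactly as in the printf output that follows), but the outer "subsystems"/"bdev" envelope is inferred, since the trace only shows the per-controller fragments.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

Each entry makes bdevperf attach one NVMe-oF controller over TCP, yielding bdevs Nvme1n1..Nvme10n1 for the -q 64 -o 65536 -w verify -t 10 workload.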
00:21:07.238 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:07.238 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme1", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme2", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme3", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme4", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme5", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme6", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme7", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme8", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme9", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 },{ 00:21:07.238 "params": { 00:21:07.238 "name": "Nvme10", 00:21:07.238 "trtype": "tcp", 00:21:07.238 "traddr": "10.0.0.2", 00:21:07.238 "adrfam": "ipv4", 00:21:07.238 "trsvcid": "4420", 00:21:07.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:07.238 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:07.238 "hdgst": false, 00:21:07.238 "ddgst": false 00:21:07.238 }, 00:21:07.238 "method": "bdev_nvme_attach_controller" 00:21:07.238 }' 00:21:07.238 [2024-07-24 19:56:58.651933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.238 [2024-07-24 19:56:58.725760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.149 Running I/O for 10 seconds... 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.717 19:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2110988 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2110988 ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2110988 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2110988 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2110988' 00:21:09.717 killing process with pid 2110988 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2110988 00:21:09.717 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2110988
00:21:09.976 Received shutdown signal, test time was about 0.825083 seconds
00:21:09.976
00:21:09.976 Latency(us)
00:21:09.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:09.976 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme1n1 : 0.82 311.47 19.47 0.00 0.00 203026.25 23592.96 212450.62
00:21:09.976 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme2n1 : 0.80 238.57 14.91 0.00 0.00 259385.14 21541.40 228863.11
00:21:09.976 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme3n1 : 0.80 318.52 19.91 0.00 0.00 190306.62 20401.64 203332.56
00:21:09.976 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme4n1 : 0.81 236.22 14.76 0.00 0.00 251857.40 17552.25 264423.51
00:21:09.976 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme5n1 : 0.82 310.52 19.41 0.00 0.00 186631.57 18350.08 209715.20
00:21:09.976 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme6n1 : 0.77 166.46 10.40 0.00 0.00 339008.56 24732.72 299072.11
00:21:09.976 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme7n1 : 0.79 244.31 15.27 0.00 0.00 226676.20 26670.30 197861.73
00:21:09.976 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme8n1 : 0.77 254.46 15.90 0.00 0.00 210890.28 2806.65 223392.28
00:21:09.976 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme9n1 : 0.82 235.04 14.69 0.00 0.00 226802.05 25074.64 248011.02
00:21:09.976 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:09.976 Verification LBA range: start 0x0 length 0x400
00:21:09.976 Nvme10n1 : 0.78 246.02 15.38 0.00 0.00 209197.56 21541.40 226127.69
00:21:09.976 ===================================================================================================================
00:21:09.976 Total : 2561.59 160.10 0.00 0.00 223490.94 2806.65 299072.11
00:21:09.976 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2110685 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.356 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.357 rmmod nvme_tcp 00:21:11.357 rmmod nvme_fabrics 00:21:11.357 rmmod nvme_keyring 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.357
19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2110685 ']' 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2110685 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2110685 ']' 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2110685 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2110685 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2110685' 00:21:11.357 killing process with pid 2110685 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2110685 00:21:11.357 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2110685 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.617 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.160 00:21:14.160 real 0m8.336s 00:21:14.160 user 0m25.886s 00:21:14.160 sys 0m1.362s 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.160 ************************************ 
00:21:14.160 END TEST nvmf_shutdown_tc2 00:21:14.160 ************************************ 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:14.160 ************************************ 00:21:14.160 START TEST nvmf_shutdown_tc3 00:21:14.160 ************************************ 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.160 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.161 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.161 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.161 19:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.161 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:14.162 00:21:14.162 --- 10.0.0.2 ping statistics --- 00:21:14.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.162 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:14.162 00:21:14.162 --- 10.0.0.1 ping statistics --- 00:21:14.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.162 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2112344 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2112344 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2112344 ']' 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
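Annotation: the waitforlisten helper traced here (and at every nvmfappstart in this run) is a bounded readiness poll: fail if the pid is missing or the process dies, return 0 once the RPC socket answers. A minimal sketch under stated assumptions; the rpc.py probe is an assumption about how common/autotest_common.sh detects readiness.

    waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}   # matches the @835 default above
      local max_retries=100 i                   # matches @836 above
      [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard at @831
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1       # app died while starting
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          return 0                                    # socket is accepting RPCs
        fi
        sleep 0.5
      done
      return 1                                        # retries exhausted
    }

The "(( i == 0 ))" / "return 0" pair visible just below is the post-loop success test of exactly this poll. Note also that by this third shutdown test the target command line has accumulated three "ip netns exec cvl_0_0_ns_spdk" prefixes, since nvmf/common.sh@270 prepends NVMF_TARGET_NS_CMD to NVMF_APP on every init; re-entering the same namespace is idempotent, so this is harmless.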
00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.162 [2024-07-24 19:57:05.572322] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:14.162 [2024-07-24 19:57:05.572362] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.162 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.162 [2024-07-24 19:57:05.634100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.162 [2024-07-24 19:57:05.712630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.162 [2024-07-24 19:57:05.712669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.162 [2024-07-24 19:57:05.712676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.162 [2024-07-24 19:57:05.712682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.162 [2024-07-24 19:57:05.712687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.162 [2024-07-24 19:57:05.712797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.162 [2024-07-24 19:57:05.712817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.162 [2024-07-24 19:57:05.712933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.162 [2024-07-24 19:57:05.712933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.104 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.105 [2024-07-24 19:57:06.429498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.105 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.105 Malloc1 00:21:15.105 [2024-07-24 19:57:06.525521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.105 Malloc2 00:21:15.105 Malloc3 00:21:15.105 Malloc4 00:21:15.105 Malloc5 00:21:15.365 Malloc6 00:21:15.365 Malloc7 00:21:15.365 Malloc8 00:21:15.365 Malloc9 00:21:15.365 Malloc10 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2112618 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2112618 /var/tmp/bdevperf.sock 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2112618 ']' 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
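Each pass of the "for i in ${num_subsystems[@]} / cat" loop above appends one subsystem's worth of RPCs to rpcs.txt, and the Malloc1 through Malloc10 lines are the target replaying that batch. A hedged sketch of what each appended batch plausibly looks like — the four method names are standard SPDK RPCs, but the malloc size/block-size values and the rpc.py invocation are assumptions, not taken from this log:

# build ten subsystems' RPC batches, one block per loop iteration
for i in {1..10}; do
cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# replay the whole file in one round trip against the target's RPC socket
scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt

Feeding the batch through stdin matches the bare rpc_cmd call at shutdown.sh@35 above, which takes no subcommand of its own.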
00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.365 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.366 { 00:21:15.366 "params": { 00:21:15.366 "name": "Nvme$subsystem", 00:21:15.366 "trtype": "$TEST_TRANSPORT", 00:21:15.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.366 "adrfam": "ipv4", 00:21:15.366 "trsvcid": "$NVMF_PORT", 00:21:15.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.366 "hdgst": ${hdgst:-false}, 00:21:15.366 "ddgst": ${ddgst:-false} 00:21:15.366 }, 00:21:15.366 "method": "bdev_nvme_attach_controller" 00:21:15.366 } 00:21:15.366 EOF 00:21:15.366 )") 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.366 { 00:21:15.366 "params": { 00:21:15.366 "name": "Nvme$subsystem", 00:21:15.366 "trtype": "$TEST_TRANSPORT", 00:21:15.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.366 "adrfam": "ipv4", 00:21:15.366 "trsvcid": "$NVMF_PORT", 00:21:15.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.366 "hdgst": ${hdgst:-false}, 00:21:15.366 "ddgst": ${ddgst:-false} 00:21:15.366 }, 00:21:15.366 "method": "bdev_nvme_attach_controller" 00:21:15.366 } 00:21:15.366 EOF 00:21:15.366 )") 00:21:15.366 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.628 { 00:21:15.628 "params": { 00:21:15.628 "name": "Nvme$subsystem", 00:21:15.628 "trtype": "$TEST_TRANSPORT", 00:21:15.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.628 "adrfam": "ipv4", 00:21:15.628 "trsvcid": "$NVMF_PORT", 00:21:15.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.628 "hdgst": ${hdgst:-false}, 00:21:15.628 "ddgst": ${ddgst:-false} 00:21:15.628 }, 00:21:15.628 "method": "bdev_nvme_attach_controller" 00:21:15.628 } 00:21:15.628 EOF 00:21:15.628 )") 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:15.628 { 00:21:15.628 "params": { 00:21:15.628 "name": "Nvme$subsystem", 00:21:15.628 "trtype": "$TEST_TRANSPORT", 00:21:15.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.628 "adrfam": "ipv4", 00:21:15.628 "trsvcid": "$NVMF_PORT", 00:21:15.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.628 "hdgst": ${hdgst:-false}, 00:21:15.628 "ddgst": ${ddgst:-false} 00:21:15.628 }, 00:21:15.628 "method": "bdev_nvme_attach_controller" 00:21:15.628 } 00:21:15.628 EOF 00:21:15.628 )") 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.628 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 [2024-07-24 19:57:06.994113] Starting SPDK 
v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:15.629 [2024-07-24 19:57:06.994165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112618 ] 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.629 { 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme$subsystem", 00:21:15.629 "trtype": "$TEST_TRANSPORT", 00:21:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "$NVMF_PORT", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.629 "hdgst": ${hdgst:-false}, 00:21:15.629 "ddgst": ${ddgst:-false} 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 } 00:21:15.629 EOF 00:21:15.629 )") 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
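Each heredoc above contributes one bdev_nvme_attach_controller fragment to the config array, with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded when the cat runs; the trailing "jq ." then validates and pretty-prints the assembled document. A hedged sketch of that final assembly step — the "subsystems"/"bdev" wrapper keys follow SPDK's usual JSON-config layout and are an assumption here, since this excerpt only shows the fragments and the joined params objects:

# join the collected fragments with commas and wrap them in a JSON-config
# document; 'jq .' fails loudly if the result is not valid JSON
IFS=','
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
JSON

Setting IFS to a comma makes ${config[*]} join the ten fragments into one array body, which matches the IFS=, and printf '%s\n' steps traced below; bdevperf then consumes the result on --json /dev/fd/63, i.e. through a process substitution, so the ten-controller config never touches disk.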
00:21:15.629 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:15.629 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme1", 00:21:15.629 "trtype": "tcp", 00:21:15.629 "traddr": "10.0.0.2", 00:21:15.629 "adrfam": "ipv4", 00:21:15.629 "trsvcid": "4420", 00:21:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.629 "hdgst": false, 00:21:15.629 "ddgst": false 00:21:15.629 }, 00:21:15.629 "method": "bdev_nvme_attach_controller" 00:21:15.629 },{ 00:21:15.629 "params": { 00:21:15.629 "name": "Nvme2", 00:21:15.629 "trtype": "tcp", 00:21:15.629 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme3", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme4", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme5", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme6", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme7", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme8", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme9", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 },{ 00:21:15.630 "params": { 00:21:15.630 "name": "Nvme10", 00:21:15.630 "trtype": "tcp", 00:21:15.630 "traddr": "10.0.0.2", 00:21:15.630 "adrfam": "ipv4", 00:21:15.630 "trsvcid": "4420", 00:21:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.630 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.630 "hdgst": false, 00:21:15.630 "ddgst": false 00:21:15.630 }, 00:21:15.630 "method": "bdev_nvme_attach_controller" 00:21:15.630 }' 00:21:15.630 [2024-07-24 19:57:07.050384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.630 [2024-07-24 19:57:07.124601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.542 Running I/O for 10 seconds... 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:17.542 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:17.823 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:18.126 19:57:09 
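The repeated rpc_cmd/jq round trips above are the waitforio loop warming up: read_io_count climbs from 3 to 67 with a 0.25 s sleep between samples, and shutdown only proceeds once Nvme1n1 has served at least 100 reads. A sketch of the loop as traced at shutdown.sh@57-67, with scripts/rpc.py standing in for the log's rpc_cmd wrapper:

# poll bdevperf's iostat until the first controller shows >= 100 reads,
# giving up after 10 samples (~2.5 s)
waitforio() {
    local rpc_sock=$1 bdev=$2
    local i=10 ops
    while ((i != 0)); do
        ops=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
        if [ "$ops" -ge 100 ]; then
            return 0   # real I/O is flowing; safe to kill the target under load
        fi
        sleep 0.25
        ((i--))
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme1n1

On the next sample below, read_io_count reaches 195, the -ge 100 test passes, and the trace moves straight to killprocess 2112344 — killing nvmf_tgt while bdevperf's queue-depth-64 verify workload is still running, which is exactly the shutdown-under-load scenario tc3 exercises.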
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2112344 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2112344 ']' 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2112344 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2112344 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2112344' 00:21:18.126 killing process with pid 2112344 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2112344 00:21:18.126 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2112344 00:21:18.126 [2024-07-24 19:57:09.562194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is 
same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.126 [2024-07-24 19:57:09.562456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562587] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.562636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8640 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 
00:21:18.127 [2024-07-24 19:57:09.565637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is 
same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.127 [2024-07-24 19:57:09.565860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.565939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8fc0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.566020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539c70 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.566144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565ee0 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.566239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9c20 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.566326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.128 [2024-07-24 19:57:09.566377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:18.128 [2024-07-24 19:57:09.566384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705950 is same with the state(5) to be set 00:21:18.128 [2024-07-24 19:57:09.567281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc94a0 is same with the state(5) to be set 00:21:18.128 [... last message repeated 49 more times ...] [2024-07-24 19:57:09.567862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.567986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.567994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.129 [2024-07-24 19:57:09.568140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 [2024-07-24 19:57:09.568271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.129 
[2024-07-24 19:57:09.568288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.129 [2024-07-24 19:57:09.568296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 
19:57:09.568440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.130 [2024-07-24 19:57:09.568831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.130 [2024-07-24 19:57:09.568840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.568847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9810 is same with the state(5) to be set 00:21:18.131 [... last message repeated 62 more times (originally interleaved with the notices below) ...] [2024-07-24 19:57:09.569244] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x161c5d0 was disconnected and freed. reset controller. 00:21:18.131 [2024-07-24 19:57:09.569383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.569396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.569417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.569437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.569456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.131 [2024-07-24 19:57:09.569478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.131 [2024-07-24 19:57:09.569487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569563] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.132 [2024-07-24 19:57:09.569853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.132 [2024-07-24 19:57:09.569860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:18.132 [2024-07-24 19:57:09.569868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.132 [2024-07-24 19:57:09.569878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeats for READ cid:21 through cid:53 (lba 19072 through 23168, step 128), each aborted with SQ DELETION (00/08), timestamps 19:57:09.569886-19:57:09.570375 ...]
00:21:18.133 [2024-07-24 19:57:09.570434] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1586410 was disconnected and freed. reset controller.
00:21:18.133 [2024-07-24 19:57:09.570557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9cd0 is same with the state(5) to be set
00:21:18.133 [2024-07-24 19:57:09.570601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.133 [2024-07-24 19:57:09.570612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... tqpair=0xdc9cd0 recv-state error repeated through 19:57:09.570951, interleaved with the same aborted WRITE NOTICE pairs for cid:35 through cid:41 (lba 29056 through 29824, step 128) ...]
00:21:18.134 [2024-07-24 19:57:09.571746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdca1b0 is same with the state(5) to be set
[... message repeated for tqpair=0xdca1b0 through 19:57:09.572130 ...]
00:21:18.135 [2024-07-24 19:57:09.572686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdca520 is same with the state(5) to be set
[... message repeated for tqpair=0xdca520 through 19:57:09.573137 ...]
00:21:18.136 [2024-07-24 19:57:09.586322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.136 [2024-07-24 19:57:09.586344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeats for WRITE cid:43 through cid:63 (lba 30080 through 32640, step 128), timestamps 19:57:09.586357-19:57:09.586775 ...]
00:21:18.136 [2024-07-24 19:57:09.586788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.136 [2024-07-24 19:57:09.586798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeats for READ cid:1 through cid:33 (lba 24704 through 28800, step 128), timestamps 19:57:09.586809-19:57:09.587474 ...]
00:21:18.137 [2024-07-24 19:57:09.587550] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1535340 was disconnected and freed. reset controller.
00:21:18.137 [2024-07-24 19:57:09.588191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.137 [2024-07-24 19:57:09.588215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeats for WRITE cid:1 through cid:29 (lba 24704 through 28288, step 128), timestamps 19:57:09.588232-19:57:09.588802 ...]
00:21:18.138 [2024-07-24 
19:57:09.588813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.588985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.588996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 
19:57:09.589016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 
19:57:09.589240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.138 [2024-07-24 19:57:09.589261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.138 [2024-07-24 19:57:09.589271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 
19:57:09.589450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.139 [2024-07-24 19:57:09.589523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.589551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.139 [2024-07-24 19:57:09.589609] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x158bd90 was disconnected and freed. reset controller. 00:21:18.139 [2024-07-24 19:57:09.590926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:18.139 [2024-07-24 19:57:09.590954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9c20 (9): Bad file descriptor 00:21:18.139 [2024-07-24 19:57:09.591012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702c50 is 
same with the state(5) to be set 00:21:18.139 [2024-07-24 19:57:09.591123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539c70 (9): Bad file descriptor 00:21:18.139 [2024-07-24 19:57:09.591143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565ee0 (9): Bad file descriptor 00:21:18.139 [2024-07-24 19:57:09.591176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd880 is same with the state(5) to be set 00:21:18.139 [2024-07-24 19:57:09.591286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15662c0 is same with the state(5) to be set 00:21:18.139 [2024-07-24 19:57:09.591393] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.591414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.591423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.597795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.597811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.597822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.597832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.597840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155ca50 is same with the state(5) to be set 00:21:18.139 [2024-07-24 19:57:09.597860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1705950 (9): Bad file descriptor 00:21:18.139 [2024-07-24 19:57:09.597889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.597902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.597912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.139 [2024-07-24 19:57:09.597922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.139 [2024-07-24 19:57:09.597932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.597942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.597952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.597961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.597970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f6700 is same with the state(5) to be set 00:21:18.140 [2024-07-24 19:57:09.598001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.598013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.598024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.598037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.598052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.598061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.598071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.140 [2024-07-24 19:57:09.598080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.140 [2024-07-24 19:57:09.598089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702240 is same with the state(5) to be set 00:21:18.140 [2024-07-24 19:57:09.602344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:18.140 [2024-07-24 19:57:09.602388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:18.140 [2024-07-24 19:57:09.602406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15662c0 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.602469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702c50 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.602501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dd880 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.602526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155ca50 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.602548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f6700 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.602563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702240 (9): Bad file descriptor 00:21:18.140 [2024-07-24 19:57:09.603269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:18.140 [2024-07-24 19:57:09.603789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.140 [2024-07-24 19:57:09.603811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c9c20 with addr=10.0.0.2, port=4420 00:21:18.140 [2024-07-24 19:57:09.603822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9c20 is same with the state(5) to be set 00:21:18.140 [2024-07-24 19:57:09.604224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.140 [2024-07-24 19:57:09.604239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1705950 with addr=10.0.0.2, port=4420 00:21:18.140 [2024-07-24 19:57:09.604248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705950 is same with the state(5) to be set 00:21:18.140 [2024-07-24 
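[editor's note] Two fields in this excerpt are worth decoding. The parenthesized pair that spdk_nvme_print_completion emits is the hex (status code type/status code), so "(00/08)" is the NVMe generic status "Command Aborted due to SQ Deletion" - the expected fate of in-flight I/O when qpairs are torn down for a controller reset. The "errno = 111" from posix.c is ECONNREFUSED on Linux, i.e. the target's listener is not yet accepting during the reconnect attempts. A minimal decoder sketch for these two fields (a hypothetical helper, not anything in the SPDK tree):

    # Hypothetical decoder for two fields seen in this log; not an SPDK utility.
    import errno

    # NVMe generic command status (SCT 0x0) values that appear above.
    GENERIC_SC = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",   # matches the (00/08) completions above
    }

    def decode_status(token: str) -> str:
        """Decode a '(SCT/SC)' token such as '00/08' from spdk_nvme_print_completion."""
        sct, sc = (int(field, 16) for field in token.split("/"))
        if sct == 0x0:
            return GENERIC_SC.get(sc, f"generic status 0x{sc:02x}")
        return f"SCT 0x{sct:x}, SC 0x{sc:02x}"

    print(decode_status("00/08"))   # -> ABORTED - SQ DELETION
    print(errno.errorcode[111])     # -> ECONNREFUSED (the posix.c connect() failures, Linux numbering)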
00:21:18.140 [2024-07-24 19:57:09.604312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.140 [2024-07-24 19:57:09.604326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:18.140 [... trimmed 19:57:09.604342-605649: 62 analogous READ command/completion pairs, cid:1-62, lba stepping by 128 from 24704 to 32512, each ABORTED - SQ DELETION (00/08) ...]
00:21:18.142 [2024-07-24 19:57:09.605660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.142 [2024-07-24 19:57:09.605669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:18.142 [2024-07-24 19:57:09.607409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.142 [2024-07-24 19:57:09.607428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:18.142 [... trimmed 19:57:09.607442-608175: 34 analogous READ command/completion pairs, cid:1-34, lba stepping by 128 from 16512 to 20736, each ABORTED - SQ DELETION (00/08) ...]
00:21:18.143 [2024-07-24 19:57:09.608187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.143 [2024-07-24 19:57:09.608196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:18.143 [2024-07-24 19:57:09.608207] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.143 [2024-07-24 19:57:09.608665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.143 [2024-07-24 19:57:09.608674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.608685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.144 [2024-07-24 19:57:09.608695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.608706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.144 [2024-07-24 19:57:09.608715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.608726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.144 [2024-07-24 19:57:09.608735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.608747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.144 [2024-07-24 19:57:09.608756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.608767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.144 [2024-07-24 19:57:09.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.144 [2024-07-24 19:57:09.610655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:18.144 [2024-07-24 19:57:09.610686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:18.144 [2024-07-24 19:57:09.611103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.144 [2024-07-24 19:57:09.611118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15662c0 with addr=10.0.0.2, port=4420 00:21:18.144 [2024-07-24 19:57:09.611127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15662c0 is same with the state(5) to be set 00:21:18.144 [2024-07-24 19:57:09.611725] 
00:21:18.144 [2024-07-24 19:57:09.611725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.144 [2024-07-24 19:57:09.611736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1702240 with addr=10.0.0.2, port=4420
00:21:18.144 [2024-07-24 19:57:09.611744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702240 is same with the state(5) to be set
00:21:18.144 [2024-07-24 19:57:09.611753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9c20 (9): Bad file descriptor
00:21:18.144 [2024-07-24 19:57:09.611763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1705950 (9): Bad file descriptor
00:21:18.144 [2024-07-24 19:57:09.611833 .. 19:57:09.612862] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs elided]
00:21:18.146 [2024-07-24 19:57:09.612869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1588ec0 is same with the state(5) to be set
00:21:18.146 [2024-07-24 19:57:09.612924] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1588ec0 was disconnected and freed. reset controller.
00:21:18.146 [2024-07-24 19:57:09.612983] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:18.146 [2024-07-24 19:57:09.613031] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:18.146 [2024-07-24 19:57:09.613085] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:18.146 [2024-07-24 19:57:09.614101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.146 [2024-07-24 19:57:09.614116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539c70 with addr=10.0.0.2, port=4420
00:21:18.146 [2024-07-24 19:57:09.614125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539c70 is same with the state(5) to be set
00:21:18.146 [2024-07-24 19:57:09.614448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.146 [2024-07-24 19:57:09.614459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1565ee0 with addr=10.0.0.2, port=4420
00:21:18.146 [2024-07-24 19:57:09.614467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565ee0 is same with the state(5) to be set
00:21:18.146 [2024-07-24 19:57:09.614476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15662c0 (9): Bad file descriptor
00:21:18.146 [2024-07-24 19:57:09.614485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702240 (9): Bad file descriptor
00:21:18.146 [2024-07-24 19:57:09.614495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:18.146 [2024-07-24 19:57:09.614501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:18.146 [2024-07-24 19:57:09.614510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:18.146 [2024-07-24 19:57:09.614523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:18.146 [2024-07-24 19:57:09.614530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:18.146 [2024-07-24 19:57:09.614541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:18.146 [2024-07-24 19:57:09.614567] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.146 [2024-07-24 19:57:09.614585] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.146 [2024-07-24 19:57:09.614596] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.146 [2024-07-24 19:57:09.616098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.146 [2024-07-24 19:57:09.616113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.146 [2024-07-24 19:57:09.616131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:18.146 [2024-07-24 19:57:09.616150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539c70 (9): Bad file descriptor
00:21:18.146 [2024-07-24 19:57:09.616159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565ee0 (9): Bad file descriptor
00:21:18.146 [2024-07-24 19:57:09.616167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:18.146 [2024-07-24 19:57:09.616173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:18.146 [2024-07-24 19:57:09.616184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:18.146 [2024-07-24 19:57:09.616194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:18.146 [2024-07-24 19:57:09.616200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:18.146 [2024-07-24 19:57:09.616207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:18.146 [2024-07-24 19:57:09.616219] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.146 [2024-07-24 19:57:09.616229] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.146 [2024-07-24 19:57:09.616311 .. 19:57:09.617016] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..44 nsid:1 lba:24576..30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 [45 command/completion pairs elided]
00:21:18.147 [2024-07-24 19:57:09.617024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24
19:57:09.617031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.147 [2024-07-24 19:57:09.617181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.147 [2024-07-24 19:57:09.617189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.617321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.617329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589e80 is same with the state(5) to be set 00:21:18.148 [2024-07-24 19:57:09.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.148 [2024-07-24 19:57:09.618847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.148 [2024-07-24 19:57:09.618855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.618987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.618995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.149 [2024-07-24 19:57:09.619237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 
19:57:09.619387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.619416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.619423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169bfd0 is same with the state(5) to be set 00:21:18.149 [2024-07-24 19:57:09.620422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.620433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.149 [2024-07-24 19:57:09.620446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.149 [2024-07-24 19:57:09.620455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.150 [2024-07-24 19:57:09.620945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.150 [2024-07-24 19:57:09.620953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.620959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.620967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.620974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.620982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.620988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.620997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.151 [2024-07-24 19:57:09.621385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.151 [2024-07-24 19:57:09.621392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a9b0 is same with the state(5) to be set 00:21:18.151 [2024-07-24 19:57:09.622615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.151 [2024-07-24 19:57:09.622627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
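The completions above all carry the status pair "(00/08)" that spdk_nvme_print_completion renders as ABORTED - SQ DELETION: in NVMe terms, status code type 00h (Generic Command Status) with status code 08h (Command Aborted due to SQ Deletion). The reads did not fail on the media; their submission queue was deleted out from under them while the controller was being reset. A minimal standalone decoder for that pair (a sketch against the NVMe base specification, not SPDK source):

#include <stdio.h>
#include <stdint.h>

/* Decode the "(SCT/SC)" pair that appears in the completion log lines. */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
	if (sct != 0x00) {
		return "non-generic status code type";
	}
	switch (sc) { /* Generic Command Status codes */
	case 0x00: return "SUCCESS";
	case 0x08: return "ABORTED - SQ DELETION";
	default:   return "other generic status";
	}
}

int main(void)
{
	/* Every completion in the dump above carries sct=0x00, sc=0x08. */
	printf("(00/08) -> %s\n", decode_status(0x00, 0x08));
	return 0;
}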
00:21:18.151 [2024-07-24 19:57:09.622634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:18.151 [2024-07-24 19:57:09.622644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:18.151 task offset: 27392 on job bdev=Nvme10n1 fails
00:21:18.151
00:21:18.151 Latency(us)
00:21:18.151 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error after the runtime shown)
00:21:18.151 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:18.151 Nvme1n1            :       0.90     212.42      13.28      70.81     0.00  223707.94   21655.37  206067.98
00:21:18.151 Nvme2n1            :       0.90     142.80       8.92      71.40     0.00  290584.78   22567.18  306366.55
00:21:18.151 Nvme3n1            :       0.91     141.13       8.82      70.56     0.00  288796.49   20743.57  251658.24
00:21:18.151 Nvme4n1            :       0.91     140.17       8.76      70.09     0.00  285593.30   40119.43  286306.84
00:21:18.152 Nvme5n1            :       0.90     213.86      13.37      71.29     0.00  206343.12   22225.25  224304.08
00:21:18.152 Nvme6n1            :       0.92     209.74      13.11      69.91     0.00  206865.14   20971.52  262599.90
00:21:18.152 Nvme7n1            :       0.92     139.51       8.72      69.76     0.00  271269.18   21085.50  244363.80
00:21:18.152 Nvme8n1            :       0.92     208.82      13.05      69.61     0.00  199935.78   20743.57  223392.28
00:21:18.152 Nvme9n1            :       0.90     213.52      13.35      71.17     0.00  190905.21   22111.28  227039.50
00:21:18.152 Nvme10n1           :       0.89     216.26      13.52      72.09     0.00  184200.24   21655.37  237069.36
00:21:18.152 ===================================================================================================================
00:21:18.152 Total              :              1838.24     114.89     706.68     0.00  229348.92   20743.57  306366.55
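The MiB/s column is just the IOPS column scaled by the fixed 65536-byte IO size that every job ran with: MiB/s = IOPS * 65536 / 2^20. For the Nvme1n1 row, 212.42 * 65536 / 1048576 = 13.28, matching the table. A minimal C check of that conversion (constants copied from the table above; nothing here is taken from SPDK itself):

#include <stdio.h>

int main(void)
{
	const double iops = 212.42;         /* Nvme1n1 IOPS from the table  */
	const double io_size = 65536.0;     /* bytes per IO, as in the jobs */
	const double mib = 1024.0 * 1024.0; /* bytes per MiB                */

	printf("%.2f MiB/s\n", iops * io_size / mib); /* prints 13.28 */
	return 0;
}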
114.89 706.68 0.00 229348.92 20743.57 306366.55 00:21:18.152 [2024-07-24 19:57:09.643919] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:18.152 [2024-07-24 19:57:09.643960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:18.152 [2024-07-24 19:57:09.644437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.152 [2024-07-24 19:57:09.644454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155ca50 with addr=10.0.0.2, port=4420 00:21:18.152 [2024-07-24 19:57:09.644464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155ca50 is same with the state(5) to be set 00:21:18.152 [2024-07-24 19:57:09.644472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:18.152 [2024-07-24 19:57:09.644479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:18.152 [2024-07-24 19:57:09.644487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:18.152 [2024-07-24 19:57:09.644501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:18.152 [2024-07-24 19:57:09.644508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:18.152 [2024-07-24 19:57:09.644515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:18.152 [2024-07-24 19:57:09.644539] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:18.152 [2024-07-24 19:57:09.644552] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:18.152 [2024-07-24 19:57:09.645111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.152 [2024-07-24 19:57:09.645125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
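[editor's note] The failure table above is bdevperf's end-of-run summary: per job, runtime in seconds, IOPS, MiB/s, failed I/O per second, timeouts per second, and average/min/max latency in microseconds. A minimal sketch of the kind of invocation that produces it, assuming the bdevperf binary path used elsewhere in this log; the JSON config filename is a hypothetical stand-in for the target description the test generates:

  #!/usr/bin/env bash
  # Illustrative only; parameters mirror the job headers in the table:
  #   -q 64     queue depth per job ("depth: 64")
  #   -o 65536  I/O size in bytes ("IO size: 65536")
  #   -w verify write-then-read-back verification workload
  #   -t 10     run time in seconds (assumed; not shown in this log)
  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  "$BDEVPERF" --json ./nvmf_bdevs.json -q 64 -o 65536 -w verify -t 10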
00:21:18.152 [2024-07-24 19:57:09.645587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.645601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dd880 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.645609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd880 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.646009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.646019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f6700 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.646026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f6700 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.646423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.646435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1702c50 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.646442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702c50 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.646455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155ca50 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.646495] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:18.152 [2024-07-24 19:57:09.647252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dd880 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.647267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f6700 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.647276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702c50 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.647285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:18.152 [2024-07-24 19:57:09.647292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:18.152 [2024-07-24 19:57:09.647303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:18.152 [2024-07-24 19:57:09.647358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:18.152 [2024-07-24 19:57:09.647407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.152 [2024-07-24 19:57:09.647444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:18.152 [2024-07-24 19:57:09.647450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:18.152 [2024-07-24 19:57:09.647456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:18.152 [2024-07-24 19:57:09.647465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:18.152 [2024-07-24 19:57:09.647471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:18.152 [2024-07-24 19:57:09.647477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:18.152 [2024-07-24 19:57:09.647486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:18.152 [2024-07-24 19:57:09.647491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:18.152 [2024-07-24 19:57:09.647498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:18.152 [2024-07-24 19:57:09.647959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.152 [2024-07-24 19:57:09.647971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.152 [2024-07-24 19:57:09.647977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
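[editor's note] Each "controller reinitialization failed" above is spdk_nvme_ctrlr_reconnect_poll_async giving up after the qpair's TCP connect() is refused. A sketch, not part of the test, of watching this from outside while a reset storm is in flight, assuming the host application's RPC socket is still reachable (socket path is SPDK's default; the bdevperf instance later in this log uses -r /var/tmp/bdevperf.sock instead):

  # Poll the bdev layer's view of the attached NVMe controllers once per second.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in {1..5}; do
    "$rpc_py" -s /var/tmp/spdk.sock bdev_nvme_get_controllers
    sleep 1
  done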
00:21:18.152 [2024-07-24 19:57:09.648330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.648344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1705950 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.648352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705950 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.648770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.648780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c9c20 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.648787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9c20 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.649118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.649129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1702240 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.649136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702240 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.649494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.649504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15662c0 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.649514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15662c0 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.649840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.649851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1565ee0 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.649857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565ee0 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.650304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.152 [2024-07-24 19:57:09.650315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539c70 with addr=10.0.0.2, port=4420
00:21:18.152 [2024-07-24 19:57:09.650322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539c70 is same with the state(5) to be set
00:21:18.152 [2024-07-24 19:57:09.650354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1705950 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.650365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9c20 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.650373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702240 (9): Bad file descriptor
00:21:18.152 [2024-07-24 19:57:09.650381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15662c0 (9): Bad file descriptor
00:21:18.153 [2024-07-24 19:57:09.650389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565ee0 (9): Bad file descriptor
00:21:18.153 [2024-07-24 19:57:09.650397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539c70 (9): Bad file descriptor
00:21:18.153 [2024-07-24 19:57:09.650432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:18.153 [2024-07-24 19:57:09.650547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:18.153 [2024-07-24 19:57:09.650554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:18.153 [2024-07-24 19:57:09.650579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.153 [2024-07-24 19:57:09.650585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.153 [2024-07-24 19:57:09.650591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.153 [2024-07-24 19:57:09.650596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.153 [2024-07-24 19:57:09.650602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
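[editor's note] errno = 111 in the connect() failures above is ECONNREFUSED: the target executed spdk_app_stop, so nothing listens on 10.0.0.2:4420 any more and every reconnect attempt drives another controller into the failed state (one final reset failure follows below). A quick way to confirm the listener is gone, a sketch using the namespace and interface names from the setup trace later in this log:

  # Expect no listener row inside the target namespace, and a refused connect
  # from the initiator side, matching the errors above.
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no NVMe/TCP listener"
  nc -z -w 1 10.0.0.2 4420 || echo "connect refused, as in the log"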
00:21:18.153 [2024-07-24 19:57:09.650607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.413 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:18.413 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2112618 00:21:19.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2112618) - No such process 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.796 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:19.796 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.796 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:19.796 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.796 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.796 rmmod nvme_tcp 00:21:19.796 rmmod nvme_fabrics 00:21:19.796 rmmod nvme_keyring 00:21:19.796 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.797 19:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.797 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.707 00:21:21.707 real 0m7.910s 00:21:21.707 user 0m19.714s 00:21:21.707 sys 0m1.323s 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.707 ************************************ 00:21:21.707 END TEST nvmf_shutdown_tc3 00:21:21.707 ************************************ 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:21.707 00:21:21.707 real 0m31.381s 00:21:21.707 user 1m19.559s 00:21:21.707 sys 0m8.275s 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.707 ************************************ 00:21:21.707 END TEST nvmf_shutdown 00:21:21.707 ************************************ 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:21.707 00:21:21.707 real 10m45.312s 00:21:21.707 user 24m14.865s 00:21:21.707 sys 2m58.292s 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.707 19:57:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 ************************************ 00:21:21.708 END TEST nvmf_target_extra 00:21:21.708 ************************************ 00:21:21.708 19:57:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:21.708 19:57:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:21.708 19:57:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.708 19:57:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 ************************************ 00:21:21.708 START TEST nvmf_host 00:21:21.708 ************************************ 00:21:21.708 19:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:21.966 * Looking for test storage... 
00:21:21.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.966 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.967 ************************************ 00:21:21.967 START TEST nvmf_multicontroller 00:21:21.967 ************************************ 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.967 * Looking for test storage... 
00:21:21.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.967 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.968 19:57:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.253 19:57:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.253 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.254 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.254 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:21:27.254 00:21:27.254 --- 10.0.0.2 ping statistics --- 00:21:27.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.254 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:21:27.254 00:21:27.254 --- 10.0.0.1 ping statistics --- 00:21:27.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.254 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2117225 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2117225 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2117225 ']' 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.254 19:57:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.254 [2024-07-24 19:57:18.684479] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
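[editor's note] The nvmf_tcp_init trace above builds the usual phy-NIC test topology: the first E810 port (cvl_0_0) moves into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two one-packet pings prove both directions work. Condensed from the commands in the trace (a sketch, not the full nvmf/common.sh logic; error handling omitted):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port -> target ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator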
00:21:27.254 [2024-07-24 19:57:18.684528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.254 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.254 [2024-07-24 19:57:18.745263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.254 [2024-07-24 19:57:18.818514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.254 [2024-07-24 19:57:18.818553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.254 [2024-07-24 19:57:18.818560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.254 [2024-07-24 19:57:18.818566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.254 [2024-07-24 19:57:18.818571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.254 [2024-07-24 19:57:18.818672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.254 [2024-07-24 19:57:18.818760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.254 [2024-07-24 19:57:18.818761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.194 [2024-07-24 19:57:19.539318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.194 Malloc0 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.194 
19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.194 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 [2024-07-24 19:57:19.598177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 [2024-07-24 19:57:19.606087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 Malloc1 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2117272 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2117272 /var/tmp/bdevperf.sock 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2117272 ']' 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
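[Note] The trace above builds the multicontroller fixture: one TCP transport, malloc bdevs, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2 ports 4420 and 4421, before bdevperf is launched on its own RPC socket. A minimal sketch of the same setup issued by hand (the rpc_cmd helper in the trace ultimately drives scripts/rpc.py; paths assume an SPDK checkout, so adjust to your tree):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 repeats the same steps with Malloc1 and serial SPDK00000000000002; then
  # bdevperf is started with -z (wait for an RPC start signal) on a private socket:
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

Every flag above is taken from the trace itself; only the relative paths are illustrative.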
00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.195 19:57:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.134 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.134 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:29.134 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:29.134 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.134 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.395 NVMe0n1 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.395 1 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.395 request: 00:21:29.395 { 00:21:29.395 "name": "NVMe0", 00:21:29.395 "trtype": "tcp", 00:21:29.395 "traddr": "10.0.0.2", 00:21:29.395 "adrfam": "ipv4", 00:21:29.395 
"trsvcid": "4420", 00:21:29.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.395 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:29.395 "hostaddr": "10.0.0.2", 00:21:29.395 "hostsvcid": "60000", 00:21:29.395 "prchk_reftag": false, 00:21:29.395 "prchk_guard": false, 00:21:29.395 "hdgst": false, 00:21:29.395 "ddgst": false, 00:21:29.395 "method": "bdev_nvme_attach_controller", 00:21:29.395 "req_id": 1 00:21:29.395 } 00:21:29.395 Got JSON-RPC error response 00:21:29.395 response: 00:21:29.395 { 00:21:29.395 "code": -114, 00:21:29.395 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:29.395 } 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:29.395 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.396 request: 00:21:29.396 { 00:21:29.396 "name": "NVMe0", 00:21:29.396 "trtype": "tcp", 00:21:29.396 "traddr": "10.0.0.2", 00:21:29.396 "adrfam": "ipv4", 00:21:29.396 "trsvcid": "4420", 00:21:29.396 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.396 "hostaddr": "10.0.0.2", 00:21:29.396 "hostsvcid": "60000", 00:21:29.396 "prchk_reftag": false, 00:21:29.396 "prchk_guard": false, 00:21:29.396 "hdgst": false, 00:21:29.396 "ddgst": false, 00:21:29.396 "method": "bdev_nvme_attach_controller", 00:21:29.396 "req_id": 1 00:21:29.396 } 00:21:29.396 Got JSON-RPC error response 00:21:29.396 response: 00:21:29.396 { 00:21:29.396 "code": -114, 00:21:29.396 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:29.396 } 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.396 request: 00:21:29.396 { 00:21:29.396 "name": "NVMe0", 00:21:29.396 "trtype": "tcp", 00:21:29.396 "traddr": "10.0.0.2", 00:21:29.396 "adrfam": "ipv4", 00:21:29.396 "trsvcid": "4420", 00:21:29.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.396 "hostaddr": "10.0.0.2", 00:21:29.396 "hostsvcid": "60000", 00:21:29.396 "prchk_reftag": false, 00:21:29.396 "prchk_guard": false, 00:21:29.396 "hdgst": false, 00:21:29.396 "ddgst": false, 00:21:29.396 "multipath": "disable", 00:21:29.396 "method": "bdev_nvme_attach_controller", 00:21:29.396 "req_id": 1 00:21:29.396 } 00:21:29.396 Got JSON-RPC error response 00:21:29.396 response: 00:21:29.396 { 00:21:29.396 "code": -114, 00:21:29.396 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:29.396 } 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.396 request: 00:21:29.396 { 00:21:29.396 "name": "NVMe0", 00:21:29.396 "trtype": "tcp", 00:21:29.396 "traddr": "10.0.0.2", 00:21:29.396 "adrfam": "ipv4", 00:21:29.396 "trsvcid": "4420", 00:21:29.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.396 "hostaddr": "10.0.0.2", 00:21:29.396 "hostsvcid": "60000", 00:21:29.396 "prchk_reftag": false, 00:21:29.396 "prchk_guard": false, 00:21:29.396 "hdgst": false, 00:21:29.396 "ddgst": false, 00:21:29.396 "multipath": "failover", 00:21:29.396 "method": "bdev_nvme_attach_controller", 00:21:29.396 "req_id": 1 00:21:29.396 } 00:21:29.396 Got JSON-RPC error response 00:21:29.396 response: 00:21:29.396 { 00:21:29.396 "code": -114, 00:21:29.396 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:29.396 } 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.396 19:57:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.656 00:21:29.656 19:57:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.656 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.656 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.656 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.656 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.657 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:29.657 19:57:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.036 0 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2117272 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2117272 ']' 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2117272 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2117272 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
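[Note] The four -114 errors above are the point of this test: bdev_nvme_attach_controller refuses to reuse the controller name NVMe0 when the request changes the host identity (a different -q hostnqn), targets a different subsystem (cnode2), forbids extra paths (-x disable), or asks for failover with a path that already exists. Condensed to the calls the trace then runs successfully (a sketch with the same flags, not the test script itself; -s selects the bdevperf RPC socket):

  # original attach, registered as NVMe0 with a pinned host address/port
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # adding the second listener as another path under the same name is allowed
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # after detaching that path, a second controller NVMe1 on port 4421 also works
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2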
00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2117272' 00:21:31.036 killing process with pid 2117272 00:21:31.036 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2117272 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2117272 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:31.037 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:31.037 [2024-07-24 19:57:19.710950] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:21:31.037 [2024-07-24 19:57:19.710999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117272 ]
00:21:31.037 EAL: No free 2048 kB hugepages reported on node 1
00:21:31.037 [2024-07-24 19:57:19.765457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:31.037 [2024-07-24 19:57:19.841496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:31.037 [2024-07-24 19:57:21.184816] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 52280566-7e82-410e-b2ac-8bffa012f0bf already exists
00:21:31.037 [2024-07-24 19:57:21.184844] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:52280566-7e82-410e-b2ac-8bffa012f0bf alias for bdev NVMe1n1
00:21:31.037 [2024-07-24 19:57:21.184851] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:31.037 Running I/O for 1 seconds...
00:21:31.037
00:21:31.037 Latency(us)
00:21:31.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.037 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:31.037 NVMe0n1 : 1.01 23239.20 90.78 0.00 0.00 5484.62 3063.10 31001.38
00:21:31.037 ===================================================================================================================
00:21:31.037 Total : 23239.20 90.78 0.00 0.00 5484.62 3063.10 31001.38
00:21:31.037 Received shutdown signal, test time was about 1.000000 seconds
00:21:31.037
00:21:31.037 Latency(us)
00:21:31.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.037 ===================================================================================================================
00:21:31.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:31.037 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:31.037 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:31.037 rmmod nvme_tcp
00:21:31.037 rmmod nvme_fabrics
00:21:31.296 rmmod nvme_keyring
00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2117225 ']'
00:21:31.296 19:57:22
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2117225 00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2117225 ']' 00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2117225 00:21:31.296 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2117225 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2117225' 00:21:31.297 killing process with pid 2117225 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2117225 00:21:31.297 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2117225 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.556 19:57:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.465 19:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.465 00:21:33.465 real 0m11.598s 00:21:33.465 user 0m16.786s 00:21:33.465 sys 0m4.714s 00:21:33.465 19:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.465 19:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.465 ************************************ 00:21:33.465 END TEST nvmf_multicontroller 00:21:33.465 ************************************ 00:21:33.465 19:57:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.465 19:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:33.465 19:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.465 19:57:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.465 ************************************ 00:21:33.465 START TEST nvmf_aer 00:21:33.465 ************************************ 00:21:33.465 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.725 * Looking for test storage... 00:21:33.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.725 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.726 19:57:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:39.049 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:39.049 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:39.049 Found net devices under 0000:86:00.0: cvl_0_0 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.049 19:57:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.049 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:39.050 Found net devices under 0000:86:00.1: cvl_0_1 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data.
00:21:39.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:21:39.050
00:21:39.050 --- 10.0.0.2 ping statistics ---
00:21:39.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.050 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:21:39.050 19:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:39.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:39.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms
00:21:39.050
00:21:39.050 --- 10.0.0.1 ping statistics ---
00:21:39.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.050 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2121247
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2121247
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2121247 ']'
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:39.050 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:39.050 [2024-07-24 19:57:30.071676] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
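[Note] The ping checks just above validate the namespace layout the nvmf_tcp_init trace set up earlier: the target-side e810 port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, port 4420 is opened, and nvmf_tgt is then launched inside that namespace. Condensed into a sketch of the same plumbing (the cvl_* names are this rig's NICs; substitute your own):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The target's DPDK startup output continues below.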
00:21:39.050 [2024-07-24 19:57:30.071723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.050 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.050 [2024-07-24 19:57:30.128455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.050 [2024-07-24 19:57:30.206015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.050 [2024-07-24 19:57:30.206057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.050 [2024-07-24 19:57:30.206064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.050 [2024-07-24 19:57:30.206071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.050 [2024-07-24 19:57:30.206078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.050 [2024-07-24 19:57:30.206123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.050 [2024-07-24 19:57:30.206142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.050 [2024-07-24 19:57:30.206249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.050 [2024-07-24 19:57:30.206250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.311 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.311 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:21:39.311 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.311 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.311 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.571 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.571 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 [2024-07-24 19:57:30.917306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 Malloc0 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 19:57:30 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 [2024-07-24 19:57:30.968977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.572 [ 00:21:39.572 { 00:21:39.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.572 "subtype": "Discovery", 00:21:39.572 "listen_addresses": [], 00:21:39.572 "allow_any_host": true, 00:21:39.572 "hosts": [] 00:21:39.572 }, 00:21:39.572 { 00:21:39.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.572 "subtype": "NVMe", 00:21:39.572 "listen_addresses": [ 00:21:39.572 { 00:21:39.572 "trtype": "TCP", 00:21:39.572 "adrfam": "IPv4", 00:21:39.572 "traddr": "10.0.0.2", 00:21:39.572 "trsvcid": "4420" 00:21:39.572 } 00:21:39.572 ], 00:21:39.572 "allow_any_host": true, 00:21:39.572 "hosts": [], 00:21:39.572 "serial_number": "SPDK00000000000001", 00:21:39.572 "model_number": "SPDK bdev Controller", 00:21:39.572 "max_namespaces": 2, 00:21:39.572 "min_cntlid": 1, 00:21:39.572 "max_cntlid": 65519, 00:21:39.572 "namespaces": [ 00:21:39.572 { 00:21:39.572 "nsid": 1, 00:21:39.572 "bdev_name": "Malloc0", 00:21:39.572 "name": "Malloc0", 00:21:39.572 "nguid": "16493C7D40894561B70EBBFD81499510", 00:21:39.572 "uuid": "16493c7d-4089-4561-b70e-bbfd81499510" 00:21:39.572 } 00:21:39.572 ] 00:21:39.572 } 00:21:39.572 ] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2121290 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:39.572 19:57:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:39.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.572 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.572 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:39.572 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:39.572 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 Malloc1 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 [ 00:21:39.833 { 00:21:39.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.833 "subtype": "Discovery", 00:21:39.833 "listen_addresses": [], 00:21:39.833 "allow_any_host": true, 00:21:39.833 "hosts": [] 00:21:39.833 }, 00:21:39.833 { 00:21:39.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.833 "subtype": "NVMe", 00:21:39.833 "listen_addresses": [ 00:21:39.833 { 00:21:39.833 "trtype": "TCP", 00:21:39.833 "adrfam": "IPv4", 00:21:39.833 "traddr": "10.0.0.2", 00:21:39.833 "trsvcid": "4420" 00:21:39.833 } 00:21:39.833 ], 00:21:39.833 "allow_any_host": true, 00:21:39.833 "hosts": [], 00:21:39.833 "serial_number": "SPDK00000000000001", 00:21:39.833 "model_number": "SPDK bdev Controller", 00:21:39.833 "max_namespaces": 2, 00:21:39.833 "min_cntlid": 1, 00:21:39.833 "max_cntlid": 65519, 00:21:39.833 "namespaces": [ 00:21:39.833 { 00:21:39.833 "nsid": 1, 00:21:39.833 "bdev_name": "Malloc0", 00:21:39.833 "name": "Malloc0", 00:21:39.833 "nguid": "16493C7D40894561B70EBBFD81499510", 00:21:39.833 "uuid": "16493c7d-4089-4561-b70e-bbfd81499510" 00:21:39.833 }, 00:21:39.833 { 00:21:39.833 "nsid": 2, 00:21:39.833 "bdev_name": "Malloc1", 00:21:39.833 "name": "Malloc1", 00:21:39.833 "nguid": 
"1194FB14F3314788A0D69079D7D7C521", 00:21:39.833 "uuid": "1194fb14-f331-4788-a0d6-9079d7d7c521" 00:21:39.833 } 00:21:39.833 ] 00:21:39.833 } 00:21:39.833 ] 00:21:39.833 Asynchronous Event Request test 00:21:39.833 Attaching to 10.0.0.2 00:21:39.833 Attached to 10.0.0.2 00:21:39.833 Registering asynchronous event callbacks... 00:21:39.833 Starting namespace attribute notice tests for all controllers... 00:21:39.833 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:39.833 aer_cb - Changed Namespace 00:21:39.833 Cleaning up... 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2121290 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.833 rmmod nvme_tcp 00:21:39.833 rmmod nvme_fabrics 00:21:39.833 rmmod nvme_keyring 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2121247 ']' 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2121247 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2121247 ']' 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2121247 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:21:39.833 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121247 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121247' 00:21:39.834 killing process with pid 2121247 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2121247 00:21:39.834 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2121247 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.094 19:57:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.636 00:21:42.636 real 0m8.643s 00:21:42.636 user 0m6.927s 00:21:42.636 sys 0m4.145s 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.636 ************************************ 00:21:42.636 END TEST nvmf_aer 00:21:42.636 ************************************ 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.636 ************************************ 00:21:42.636 START TEST nvmf_async_init 00:21:42.636 ************************************ 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.636 * Looking for test storage... 
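The nvmf_aer suite that just closed out above exercises the Changed Namespace asynchronous event end to end: it stands up a TCP subsystem capped at two namespaces, arms the aer helper against it, then hot-adds a second namespace so the target emits the AEN seen in the aer_cb line. A condensed sketch of the RPC sequence the trace shows, assuming a running nvmf_tgt and rpc_cmd resolving to scripts/rpc.py on the default socket:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the suite's usual -o -u options
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0                # 64 MiB RAM-backed bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the aer helper (test/nvme/aer/aer in the SPDK tree) connects and registers AER callbacks,
  # touching /tmp/aer_touch_file once it is armed; then a second namespace triggers the AEN:
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # nsid 2 -> Changed Namespace AEN

The -m 2 cap is what makes the second add_ns legal; it shows up as "max_namespaces": 2 in both nvmf_get_subsystems dumps above.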
00:21:42.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:42.636 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:42.637 19:57:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9ca199d7cc4d4cbfae83c1e866edb458 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.637 19:57:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.921 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
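Device selection here is pure PCI ID matching: gather_supported_nvmf_pci_devs builds per-family ID tables (Intel E810/X722, Mellanox ConnectX) and the [[ e810 == e810 ]] check above keeps only the E810 table, which is why both 0000:86:00.x functions match 0x8086:0x159b. A minimal sketch of that classification, assuming the pci_bus_cache map keyed by "vendor:device" that the trace references:

  intel=0x8086 mellanox=0x15b3
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})   # two E810 variants
  x722=(${pci_bus_cache["$intel:0x37d2"]})
  pci_devs=("${e810[@]}")        # only the selected NIC family survives
  for pci in "${pci_devs[@]}"; do echo "Found $pci"; done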
00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.922 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.922 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.922 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.923 19:57:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:21:47.923 00:21:47.923 --- 10.0.0.2 ping statistics --- 00:21:47.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.923 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:21:47.923 00:21:47.923 --- 10.0.0.1 ping statistics --- 00:21:47.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.923 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2124800 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2124800 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2124800 ']' 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.923 19:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.923 [2024-07-24 19:57:39.211403] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
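The wiring those two pings just verified is a two-port loop through a network namespace, so NVMe/TCP traffic really leaves one E810 port and enters the other: cvl_0_0 holds the target address 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Reproducing it by hand follows the same commands the trace shows (the cvl_* names are this host's E810 netdevs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator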
00:21:47.923 [2024-07-24 19:57:39.211448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.923 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.923 [2024-07-24 19:57:39.270118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.923 [2024-07-24 19:57:39.349274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.923 [2024-07-24 19:57:39.349306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.923 [2024-07-24 19:57:39.349313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.923 [2024-07-24 19:57:39.349319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.923 [2024-07-24 19:57:39.349325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.923 [2024-07-24 19:57:39.349340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 [2024-07-24 19:57:40.056981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 null0 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:48.494 19:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9ca199d7cc4d4cbfae83c1e866edb458 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.755 [2024-07-24 19:57:40.097199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.755 nvme0n1 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.755 [ 00:21:48.755 { 00:21:48.755 "name": "nvme0n1", 00:21:48.755 "aliases": [ 00:21:48.755 "9ca199d7-cc4d-4cbf-ae83-c1e866edb458" 00:21:48.755 ], 00:21:48.755 "product_name": "NVMe disk", 00:21:48.755 "block_size": 512, 00:21:48.755 "num_blocks": 2097152, 00:21:48.755 "uuid": "9ca199d7-cc4d-4cbf-ae83-c1e866edb458", 00:21:48.755 "assigned_rate_limits": { 00:21:48.755 "rw_ios_per_sec": 0, 00:21:48.755 "rw_mbytes_per_sec": 0, 00:21:48.755 "r_mbytes_per_sec": 0, 00:21:48.755 "w_mbytes_per_sec": 0 00:21:48.755 }, 00:21:48.755 "claimed": false, 00:21:48.755 "zoned": false, 00:21:48.755 "supported_io_types": { 00:21:48.755 "read": true, 00:21:48.755 "write": true, 00:21:48.755 "unmap": false, 00:21:48.755 "flush": true, 00:21:48.755 "reset": true, 00:21:48.755 "nvme_admin": true, 00:21:48.755 "nvme_io": true, 00:21:48.755 "nvme_io_md": false, 00:21:48.755 "write_zeroes": true, 00:21:48.755 "zcopy": false, 00:21:48.755 "get_zone_info": false, 00:21:48.755 "zone_management": false, 00:21:48.755 "zone_append": false, 00:21:48.755 "compare": true, 00:21:48.755 "compare_and_write": true, 00:21:48.755 "abort": true, 00:21:48.755 "seek_hole": false, 00:21:48.755 "seek_data": false, 00:21:48.755 "copy": true, 00:21:48.755 "nvme_iov_md": 
false 00:21:48.755 }, 00:21:48.755 "memory_domains": [ 00:21:48.755 { 00:21:48.755 "dma_device_id": "system", 00:21:48.755 "dma_device_type": 1 00:21:48.755 } 00:21:48.755 ], 00:21:48.755 "driver_specific": { 00:21:48.755 "nvme": [ 00:21:48.755 { 00:21:48.755 "trid": { 00:21:48.755 "trtype": "TCP", 00:21:48.755 "adrfam": "IPv4", 00:21:48.755 "traddr": "10.0.0.2", 00:21:48.755 "trsvcid": "4420", 00:21:48.755 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:48.755 }, 00:21:48.755 "ctrlr_data": { 00:21:48.755 "cntlid": 1, 00:21:48.755 "vendor_id": "0x8086", 00:21:48.755 "model_number": "SPDK bdev Controller", 00:21:48.755 "serial_number": "00000000000000000000", 00:21:48.755 "firmware_revision": "24.09", 00:21:48.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.755 "oacs": { 00:21:48.755 "security": 0, 00:21:48.755 "format": 0, 00:21:48.755 "firmware": 0, 00:21:48.755 "ns_manage": 0 00:21:48.755 }, 00:21:48.755 "multi_ctrlr": true, 00:21:48.755 "ana_reporting": false 00:21:48.755 }, 00:21:48.755 "vs": { 00:21:48.755 "nvme_version": "1.3" 00:21:48.755 }, 00:21:48.755 "ns_data": { 00:21:48.755 "id": 1, 00:21:48.755 "can_share": true 00:21:48.755 } 00:21:48.755 } 00:21:48.755 ], 00:21:48.755 "mp_policy": "active_passive" 00:21:48.755 } 00:21:48.755 } 00:21:48.755 ] 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.755 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 [2024-07-24 19:57:40.353725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.016 [2024-07-24 19:57:40.353780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ace3d0 (9): Bad file descriptor 00:21:49.016 [2024-07-24 19:57:40.486131] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
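That reset is the heart of the async_init check: bdev_nvme drops the admin connection (the Bad file descriptor notice on tqpair 0x1ace3d0), reconnects to the same listener, and the namespace must come back under the same uuid; the bdev dumps on either side of the reset show the controller ID advancing from cntlid 1 to cntlid 2. Reduced to its RPCs, again assuming rpc_cmd wraps scripts/rpc.py against the running target:

  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0        # exposes bdev nvme0n1
  rpc_cmd bdev_nvme_reset_controller nvme0 # disconnect + reconnect behind the scenes
  rpc_cmd bdev_get_bdevs -b nvme0n1        # same uuid, new cntlid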
00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 [ 00:21:49.016 { 00:21:49.016 "name": "nvme0n1", 00:21:49.016 "aliases": [ 00:21:49.016 "9ca199d7-cc4d-4cbf-ae83-c1e866edb458" 00:21:49.016 ], 00:21:49.016 "product_name": "NVMe disk", 00:21:49.016 "block_size": 512, 00:21:49.016 "num_blocks": 2097152, 00:21:49.016 "uuid": "9ca199d7-cc4d-4cbf-ae83-c1e866edb458", 00:21:49.016 "assigned_rate_limits": { 00:21:49.016 "rw_ios_per_sec": 0, 00:21:49.016 "rw_mbytes_per_sec": 0, 00:21:49.016 "r_mbytes_per_sec": 0, 00:21:49.016 "w_mbytes_per_sec": 0 00:21:49.016 }, 00:21:49.016 "claimed": false, 00:21:49.016 "zoned": false, 00:21:49.016 "supported_io_types": { 00:21:49.016 "read": true, 00:21:49.016 "write": true, 00:21:49.016 "unmap": false, 00:21:49.016 "flush": true, 00:21:49.016 "reset": true, 00:21:49.016 "nvme_admin": true, 00:21:49.016 "nvme_io": true, 00:21:49.016 "nvme_io_md": false, 00:21:49.016 "write_zeroes": true, 00:21:49.016 "zcopy": false, 00:21:49.016 "get_zone_info": false, 00:21:49.016 "zone_management": false, 00:21:49.016 "zone_append": false, 00:21:49.016 "compare": true, 00:21:49.016 "compare_and_write": true, 00:21:49.016 "abort": true, 00:21:49.016 "seek_hole": false, 00:21:49.016 "seek_data": false, 00:21:49.016 "copy": true, 00:21:49.016 "nvme_iov_md": false 00:21:49.016 }, 00:21:49.016 "memory_domains": [ 00:21:49.016 { 00:21:49.016 "dma_device_id": "system", 00:21:49.016 "dma_device_type": 1 00:21:49.016 } 00:21:49.016 ], 00:21:49.016 "driver_specific": { 00:21:49.016 "nvme": [ 00:21:49.016 { 00:21:49.016 "trid": { 00:21:49.016 "trtype": "TCP", 00:21:49.016 "adrfam": "IPv4", 00:21:49.016 "traddr": "10.0.0.2", 00:21:49.016 "trsvcid": "4420", 00:21:49.016 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.016 }, 00:21:49.016 "ctrlr_data": { 00:21:49.016 "cntlid": 2, 00:21:49.016 "vendor_id": "0x8086", 00:21:49.016 "model_number": "SPDK bdev Controller", 00:21:49.016 "serial_number": "00000000000000000000", 00:21:49.016 "firmware_revision": "24.09", 00:21:49.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.016 "oacs": { 00:21:49.016 "security": 0, 00:21:49.016 "format": 0, 00:21:49.016 "firmware": 0, 00:21:49.016 "ns_manage": 0 00:21:49.016 }, 00:21:49.016 "multi_ctrlr": true, 00:21:49.016 "ana_reporting": false 00:21:49.016 }, 00:21:49.016 "vs": { 00:21:49.016 "nvme_version": "1.3" 00:21:49.016 }, 00:21:49.016 "ns_data": { 00:21:49.016 "id": 1, 00:21:49.016 "can_share": true 00:21:49.016 } 00:21:49.016 } 00:21:49.016 ], 00:21:49.016 "mp_policy": "active_passive" 00:21:49.016 } 00:21:49.016 } 00:21:49.016 ] 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 19:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pdG2Tr8Ha7 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pdG2Tr8Ha7 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 [2024-07-24 19:57:40.550332] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.016 [2024-07-24 19:57:40.550435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pdG2Tr8Ha7 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.016 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.016 [2024-07-24 19:57:40.558349] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.017 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.017 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pdG2Tr8Ha7 00:21:49.017 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.017 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.017 [2024-07-24 19:57:40.566385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.017 [2024-07-24 19:57:40.566416] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.277 nvme0n1 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
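The TLS leg just traced locks the subsystem down before opening the secure listener: any-host access is disabled, port 4421 is added with --secure-channel, the host NQN is registered with a pre-shared key file, and the initiator attaches with the same key; the bdev dump that follows shows the resulting connection on trsvcid 4421 with cntlid 3. The sequence in shell form (the mktemp name is whatever the run hands back, /tmp/tmp.pdG2Tr8Ha7 here), noting the log's own warnings that this PSK-path interface is deprecated for removal in v24.09:

  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"                   # the key file must not be world-readable
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"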
00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.277 [ 00:21:49.277 { 00:21:49.277 "name": "nvme0n1", 00:21:49.277 "aliases": [ 00:21:49.277 "9ca199d7-cc4d-4cbf-ae83-c1e866edb458" 00:21:49.277 ], 00:21:49.277 "product_name": "NVMe disk", 00:21:49.277 "block_size": 512, 00:21:49.277 "num_blocks": 2097152, 00:21:49.277 "uuid": "9ca199d7-cc4d-4cbf-ae83-c1e866edb458", 00:21:49.277 "assigned_rate_limits": { 00:21:49.277 "rw_ios_per_sec": 0, 00:21:49.277 "rw_mbytes_per_sec": 0, 00:21:49.277 "r_mbytes_per_sec": 0, 00:21:49.277 "w_mbytes_per_sec": 0 00:21:49.277 }, 00:21:49.277 "claimed": false, 00:21:49.277 "zoned": false, 00:21:49.277 "supported_io_types": { 00:21:49.277 "read": true, 00:21:49.277 "write": true, 00:21:49.277 "unmap": false, 00:21:49.277 "flush": true, 00:21:49.277 "reset": true, 00:21:49.277 "nvme_admin": true, 00:21:49.277 "nvme_io": true, 00:21:49.277 "nvme_io_md": false, 00:21:49.277 "write_zeroes": true, 00:21:49.277 "zcopy": false, 00:21:49.277 "get_zone_info": false, 00:21:49.277 "zone_management": false, 00:21:49.277 "zone_append": false, 00:21:49.277 "compare": true, 00:21:49.277 "compare_and_write": true, 00:21:49.277 "abort": true, 00:21:49.277 "seek_hole": false, 00:21:49.277 "seek_data": false, 00:21:49.277 "copy": true, 00:21:49.277 "nvme_iov_md": false 00:21:49.277 }, 00:21:49.277 "memory_domains": [ 00:21:49.277 { 00:21:49.277 "dma_device_id": "system", 00:21:49.277 "dma_device_type": 1 00:21:49.277 } 00:21:49.277 ], 00:21:49.277 "driver_specific": { 00:21:49.277 "nvme": [ 00:21:49.277 { 00:21:49.277 "trid": { 00:21:49.277 "trtype": "TCP", 00:21:49.277 "adrfam": "IPv4", 00:21:49.277 "traddr": "10.0.0.2", 00:21:49.277 "trsvcid": "4421", 00:21:49.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.277 }, 00:21:49.277 "ctrlr_data": { 00:21:49.277 "cntlid": 3, 00:21:49.277 "vendor_id": "0x8086", 00:21:49.277 "model_number": "SPDK bdev Controller", 00:21:49.277 "serial_number": "00000000000000000000", 00:21:49.277 "firmware_revision": "24.09", 00:21:49.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.277 "oacs": { 00:21:49.277 "security": 0, 00:21:49.277 "format": 0, 00:21:49.277 "firmware": 0, 00:21:49.277 "ns_manage": 0 00:21:49.277 }, 00:21:49.277 "multi_ctrlr": true, 00:21:49.277 "ana_reporting": false 00:21:49.277 }, 00:21:49.277 "vs": { 00:21:49.277 "nvme_version": "1.3" 00:21:49.277 }, 00:21:49.277 "ns_data": { 00:21:49.277 "id": 1, 00:21:49.277 "can_share": true 00:21:49.277 } 00:21:49.277 } 00:21:49.277 ], 00:21:49.277 "mp_policy": "active_passive" 00:21:49.277 } 00:21:49.277 } 00:21:49.277 ] 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pdG2Tr8Ha7 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:49.277 19:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.277 rmmod nvme_tcp 00:21:49.277 rmmod nvme_fabrics 00:21:49.277 rmmod nvme_keyring 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2124800 ']' 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2124800 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2124800 ']' 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2124800 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2124800 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2124800' 00:21:49.277 killing process with pid 2124800 00:21:49.277 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2124800 00:21:49.278 [2024-07-24 19:57:40.776789] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:49.278 [2024-07-24 19:57:40.776812] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:49.278 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2124800 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.538 19:57:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.538 19:57:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.448 19:57:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.448 00:21:51.448 real 0m9.260s 00:21:51.448 user 0m3.544s 00:21:51.448 sys 0m4.274s 00:21:51.448 19:57:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.448 19:57:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.448 ************************************ 00:21:51.448 END TEST nvmf_async_init 00:21:51.448 ************************************ 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.708 ************************************ 00:21:51.708 START TEST dma 00:21:51.708 ************************************ 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.708 * Looking for test storage... 00:21:51.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.708 
19:57:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.708 19:57:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.709 19:57:43 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:51.709 00:21:51.709 real 0m0.114s 00:21:51.709 user 0m0.053s 00:21:51.709 sys 0m0.069s 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:51.709 ************************************ 00:21:51.709 END TEST dma 00:21:51.709 ************************************ 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.709 ************************************ 00:21:51.709 START TEST nvmf_identify 00:21:51.709 ************************************ 00:21:51.709 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.979 * Looking for test storage... 00:21:51.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.979 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.980 19:57:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.259 19:57:48 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.259 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.260 19:57:48 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.260 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.260 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:57.260 00:21:57.260 --- 10.0.0.2 ping statistics --- 00:21:57.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.260 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:21:57.260 00:21:57.260 --- 10.0.0.1 ping statistics --- 00:21:57.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.260 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.260 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2128606 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2128606 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2128606 ']' 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.261 19:57:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.261 [2024-07-24 19:57:48.731314] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
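Note: the nvmftestinit sequence traced above reduces to a short, reproducible shell recipe. The sketch below mirrors the commands captured in this log; the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addressing are specific to this run, and the nvmf_tgt path assumes you are sitting in an SPDK build tree. It is an illustration of what the harness does (run as root), not a replacement for it.

# clear any stale addressing on the two ice ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# move the target-side port into its own network namespace;
# the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends: 10.0.0.1 = initiator (host side), 10.0.0.2 = target (inside the namespace)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring the links, and the namespace loopback, up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# load the host-side transport and start the target inside the namespace,
# with the same flags host/identify.sh uses here
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &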
00:21:57.261 [2024-07-24 19:57:48.731359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.261 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.261 [2024-07-24 19:57:48.788547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.521 [2024-07-24 19:57:48.871174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.521 [2024-07-24 19:57:48.871208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.521 [2024-07-24 19:57:48.871216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.521 [2024-07-24 19:57:48.871222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.521 [2024-07-24 19:57:48.871227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.521 [2024-07-24 19:57:48.871271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.521 [2024-07-24 19:57:48.871367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.521 [2024-07-24 19:57:48.871445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.521 [2024-07-24 19:57:48.871446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 [2024-07-24 19:57:49.560418] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 Malloc0 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.092 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.093 [2024-07-24 19:57:49.648542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.093 [ 00:21:58.093 { 00:21:58.093 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.093 "subtype": "Discovery", 00:21:58.093 "listen_addresses": [ 00:21:58.093 { 00:21:58.093 "trtype": "TCP", 00:21:58.093 "adrfam": "IPv4", 00:21:58.093 "traddr": "10.0.0.2", 00:21:58.093 "trsvcid": "4420" 00:21:58.093 } 00:21:58.093 ], 00:21:58.093 "allow_any_host": true, 00:21:58.093 "hosts": [] 00:21:58.093 }, 00:21:58.093 { 00:21:58.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.093 "subtype": "NVMe", 00:21:58.093 "listen_addresses": [ 00:21:58.093 { 00:21:58.093 "trtype": "TCP", 00:21:58.093 "adrfam": "IPv4", 00:21:58.093 "traddr": "10.0.0.2", 00:21:58.093 "trsvcid": "4420" 00:21:58.093 } 00:21:58.093 ], 00:21:58.093 "allow_any_host": true, 00:21:58.093 "hosts": [], 00:21:58.093 "serial_number": "SPDK00000000000001", 00:21:58.093 "model_number": "SPDK bdev Controller", 00:21:58.093 "max_namespaces": 32, 00:21:58.093 "min_cntlid": 1, 00:21:58.093 "max_cntlid": 65519, 00:21:58.093 "namespaces": [ 00:21:58.093 { 00:21:58.093 "nsid": 1, 00:21:58.093 "bdev_name": "Malloc0", 00:21:58.093 "name": "Malloc0", 00:21:58.093 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:58.093 "eui64": "ABCDEF0123456789", 00:21:58.093 "uuid": "e3703dc4-a7c6-4931-a60d-c695b9852be9" 00:21:58.093 } 00:21:58.093 ] 00:21:58.093 } 00:21:58.093 ] 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.093 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:58.356 [2024-07-24 19:57:49.700337] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:58.356 [2024-07-24 19:57:49.700372] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128799 ] 00:21:58.356 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.356 [2024-07-24 19:57:49.728509] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:58.356 [2024-07-24 19:57:49.728550] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:58.356 [2024-07-24 19:57:49.728554] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:58.356 [2024-07-24 19:57:49.728565] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:58.356 [2024-07-24 19:57:49.728572] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:58.356 [2024-07-24 19:57:49.729080] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:58.356 [2024-07-24 19:57:49.729104] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x682ec0 0 00:21:58.356 [2024-07-24 19:57:49.747049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:58.356 [2024-07-24 19:57:49.747069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:58.356 [2024-07-24 19:57:49.747074] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:58.356 [2024-07-24 19:57:49.747077] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:58.356 [2024-07-24 19:57:49.747112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.747118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.747122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.356 [2024-07-24 19:57:49.747132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:58.356 [2024-07-24 19:57:49.747148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.356 [2024-07-24 19:57:49.755054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.356 [2024-07-24 19:57:49.755063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.356 [2024-07-24 19:57:49.755066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.356 [2024-07-24 19:57:49.755079] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:58.356 [2024-07-24 19:57:49.755084] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:58.356 [2024-07-24 19:57:49.755089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:21:58.356 [2024-07-24 19:57:49.755101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.356 [2024-07-24 19:57:49.755115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.356 [2024-07-24 19:57:49.755127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.356 [2024-07-24 19:57:49.755298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.356 [2024-07-24 19:57:49.755313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.356 [2024-07-24 19:57:49.755316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.356 [2024-07-24 19:57:49.755329] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:58.356 [2024-07-24 19:57:49.755337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:58.356 [2024-07-24 19:57:49.755344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.356 [2024-07-24 19:57:49.755358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.356 [2024-07-24 19:57:49.755371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.356 [2024-07-24 19:57:49.755514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.356 [2024-07-24 19:57:49.755525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.356 [2024-07-24 19:57:49.755528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.356 [2024-07-24 19:57:49.755537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:58.356 [2024-07-24 19:57:49.755546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.356 [2024-07-24 19:57:49.755553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.356 [2024-07-24 19:57:49.755567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.356 [2024-07-24 19:57:49.755580] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.356 [2024-07-24 19:57:49.755732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.356 [2024-07-24 19:57:49.755741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.356 [2024-07-24 19:57:49.755744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.356 [2024-07-24 19:57:49.755753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.356 [2024-07-24 19:57:49.755763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.356 [2024-07-24 19:57:49.755778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.356 [2024-07-24 19:57:49.755789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.356 [2024-07-24 19:57:49.755930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.356 [2024-07-24 19:57:49.755940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.356 [2024-07-24 19:57:49.755943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.356 [2024-07-24 19:57:49.755947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.356 [2024-07-24 19:57:49.755954] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:58.356 [2024-07-24 19:57:49.755959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:58.356 [2024-07-24 19:57:49.755966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.357 [2024-07-24 19:57:49.756071] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:58.357 [2024-07-24 19:57:49.756076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:58.357 [2024-07-24 19:57:49.756084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.756098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.357 [2024-07-24 19:57:49.756110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.357 [2024-07-24 19:57:49.756257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.357 
[2024-07-24 19:57:49.756267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.357 [2024-07-24 19:57:49.756270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.357 [2024-07-24 19:57:49.756278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.357 [2024-07-24 19:57:49.756288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.756302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.357 [2024-07-24 19:57:49.756314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.357 [2024-07-24 19:57:49.756458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.357 [2024-07-24 19:57:49.756467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.357 [2024-07-24 19:57:49.756470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.357 [2024-07-24 19:57:49.756478] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.357 [2024-07-24 19:57:49.756482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.756490] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:58.357 [2024-07-24 19:57:49.756499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.756508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.756518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.357 [2024-07-24 19:57:49.756533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.357 [2024-07-24 19:57:49.756706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.357 [2024-07-24 19:57:49.756716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.357 [2024-07-24 19:57:49.756719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756722] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x682ec0): datao=0, datal=4096, cccid=0 00:21:58.357 [2024-07-24 19:57:49.756726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x705e40) on tqpair(0x682ec0): expected_datao=0, payload_size=4096 00:21:58.357 [2024-07-24 19:57:49.756730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756967] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.756972] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.357 [2024-07-24 19:57:49.798061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.357 [2024-07-24 19:57:49.798064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.357 [2024-07-24 19:57:49.798074] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:58.357 [2024-07-24 19:57:49.798079] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:58.357 [2024-07-24 19:57:49.798083] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:58.357 [2024-07-24 19:57:49.798088] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:58.357 [2024-07-24 19:57:49.798092] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:58.357 [2024-07-24 19:57:49.798096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.798104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.798114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.357 [2024-07-24 19:57:49.798141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.357 [2024-07-24 19:57:49.798287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.357 [2024-07-24 19:57:49.798297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.357 [2024-07-24 19:57:49.798300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.357 [2024-07-24 19:57:49.798311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798324] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.357 [2024-07-24 19:57:49.798329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.357 [2024-07-24 19:57:49.798348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.357 [2024-07-24 19:57:49.798364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.357 [2024-07-24 19:57:49.798380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.798391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:58.357 [2024-07-24 19:57:49.798397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.357 [2024-07-24 19:57:49.798420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705e40, cid 0, qid 0 00:21:58.357 [2024-07-24 19:57:49.798425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x705fc0, cid 1, qid 0 00:21:58.357 [2024-07-24 19:57:49.798429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706140, cid 2, qid 0 00:21:58.357 [2024-07-24 19:57:49.798433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7062c0, cid 3, qid 0 00:21:58.357 [2024-07-24 19:57:49.798437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706440, cid 4, qid 0 00:21:58.357 [2024-07-24 19:57:49.798617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.357 [2024-07-24 19:57:49.798627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.357 [2024-07-24 19:57:49.798630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798634] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706440) on tqpair=0x682ec0 00:21:58.357 [2024-07-24 19:57:49.798639] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:58.357 [2024-07-24 19:57:49.798644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:58.357 [2024-07-24 19:57:49.798656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.357 [2024-07-24 19:57:49.798660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x682ec0) 00:21:58.357 [2024-07-24 19:57:49.798666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.358 [2024-07-24 19:57:49.798678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706440, cid 4, qid 0 00:21:58.358 [2024-07-24 19:57:49.798916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.358 [2024-07-24 19:57:49.798927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.358 [2024-07-24 19:57:49.798933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.798936] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x682ec0): datao=0, datal=4096, cccid=4 00:21:58.358 [2024-07-24 19:57:49.798940] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x706440) on tqpair(0x682ec0): expected_datao=0, payload_size=4096 00:21:58.358 [2024-07-24 19:57:49.798944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.798950] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.798954] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.358 [2024-07-24 19:57:49.799219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.358 [2024-07-24 19:57:49.799222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706440) on tqpair=0x682ec0 00:21:58.358 [2024-07-24 19:57:49.799236] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:58.358 [2024-07-24 19:57:49.799257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x682ec0) 00:21:58.358 [2024-07-24 19:57:49.799267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.358 [2024-07-24 19:57:49.799273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x682ec0) 00:21:58.358 [2024-07-24 19:57:49.799285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:58.358 [2024-07-24 19:57:49.799300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706440, cid 4, qid 0 00:21:58.358 [2024-07-24 19:57:49.799305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7065c0, cid 5, qid 0 00:21:58.358 [2024-07-24 19:57:49.799476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.358 [2024-07-24 19:57:49.799486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.358 [2024-07-24 19:57:49.799490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799493] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x682ec0): datao=0, datal=1024, cccid=4 00:21:58.358 [2024-07-24 19:57:49.799497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x706440) on tqpair(0x682ec0): expected_datao=0, payload_size=1024 00:21:58.358 [2024-07-24 19:57:49.799501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799507] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799510] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.358 [2024-07-24 19:57:49.799520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.358 [2024-07-24 19:57:49.799523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.799527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7065c0) on tqpair=0x682ec0 00:21:58.358 [2024-07-24 19:57:49.840275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.358 [2024-07-24 19:57:49.840290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.358 [2024-07-24 19:57:49.840293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.840296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706440) on tqpair=0x682ec0 00:21:58.358 [2024-07-24 19:57:49.840318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.840322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x682ec0) 00:21:58.358 [2024-07-24 19:57:49.840329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.358 [2024-07-24 19:57:49.840346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706440, cid 4, qid 0 00:21:58.358 [2024-07-24 19:57:49.840501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.358 [2024-07-24 19:57:49.840511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.358 [2024-07-24 19:57:49.840514] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.840518] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x682ec0): datao=0, datal=3072, cccid=4 00:21:58.358 [2024-07-24 19:57:49.840522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x706440) on tqpair(0x682ec0): expected_datao=0, payload_size=3072 00:21:58.358 [2024-07-24 19:57:49.840525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.840774] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.840778] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.358 [2024-07-24 19:57:49.884064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.358 [2024-07-24 19:57:49.884067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706440) on tqpair=0x682ec0 00:21:58.358 [2024-07-24 19:57:49.884080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x682ec0) 00:21:58.358 [2024-07-24 19:57:49.884091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.358 [2024-07-24 19:57:49.884106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x706440, cid 4, qid 0 00:21:58.358 [2024-07-24 19:57:49.884360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.358 [2024-07-24 19:57:49.884370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.358 [2024-07-24 19:57:49.884373] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884377] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x682ec0): datao=0, datal=8, cccid=4 00:21:58.358 [2024-07-24 19:57:49.884380] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x706440) on tqpair(0x682ec0): expected_datao=0, payload_size=8 00:21:58.358 [2024-07-24 19:57:49.884384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884390] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.884394] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.925267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.358 [2024-07-24 19:57:49.925281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.358 [2024-07-24 19:57:49.925285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.358 [2024-07-24 19:57:49.925289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706440) on tqpair=0x682ec0 00:21:58.358 ===================================================== 00:21:58.358 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:58.358 ===================================================== 00:21:58.358 Controller Capabilities/Features 00:21:58.358 ================================ 00:21:58.358 Vendor ID: 0000 00:21:58.358 Subsystem Vendor ID: 0000 00:21:58.358 Serial Number: .................... 00:21:58.358 Model Number: ........................................ 
00:21:58.358 Firmware Version: 24.09 00:21:58.358 Recommended Arb Burst: 0 00:21:58.358 IEEE OUI Identifier: 00 00 00 00:21:58.358 Multi-path I/O 00:21:58.358 May have multiple subsystem ports: No 00:21:58.358 May have multiple controllers: No 00:21:58.358 Associated with SR-IOV VF: No 00:21:58.358 Max Data Transfer Size: 131072 00:21:58.358 Max Number of Namespaces: 0 00:21:58.358 Max Number of I/O Queues: 1024 00:21:58.358 NVMe Specification Version (VS): 1.3 00:21:58.358 NVMe Specification Version (Identify): 1.3 00:21:58.358 Maximum Queue Entries: 128 00:21:58.358 Contiguous Queues Required: Yes 00:21:58.358 Arbitration Mechanisms Supported 00:21:58.358 Weighted Round Robin: Not Supported 00:21:58.358 Vendor Specific: Not Supported 00:21:58.358 Reset Timeout: 15000 ms 00:21:58.358 Doorbell Stride: 4 bytes 00:21:58.358 NVM Subsystem Reset: Not Supported 00:21:58.358 Command Sets Supported 00:21:58.358 NVM Command Set: Supported 00:21:58.358 Boot Partition: Not Supported 00:21:58.358 Memory Page Size Minimum: 4096 bytes 00:21:58.358 Memory Page Size Maximum: 4096 bytes 00:21:58.358 Persistent Memory Region: Not Supported 00:21:58.358 Optional Asynchronous Events Supported 00:21:58.358 Namespace Attribute Notices: Not Supported 00:21:58.358 Firmware Activation Notices: Not Supported 00:21:58.358 ANA Change Notices: Not Supported 00:21:58.358 PLE Aggregate Log Change Notices: Not Supported 00:21:58.358 LBA Status Info Alert Notices: Not Supported 00:21:58.358 EGE Aggregate Log Change Notices: Not Supported 00:21:58.358 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.358 Zone Descriptor Change Notices: Not Supported 00:21:58.358 Discovery Log Change Notices: Supported 00:21:58.358 Controller Attributes 00:21:58.358 128-bit Host Identifier: Not Supported 00:21:58.358 Non-Operational Permissive Mode: Not Supported 00:21:58.358 NVM Sets: Not Supported 00:21:58.358 Read Recovery Levels: Not Supported 00:21:58.359 Endurance Groups: Not Supported 00:21:58.359 Predictable Latency Mode: Not Supported 00:21:58.359 Traffic Based Keep Alive: Not Supported 00:21:58.359 Namespace Granularity: Not Supported 00:21:58.359 SQ Associations: Not Supported 00:21:58.359 UUID List: Not Supported 00:21:58.359 Multi-Domain Subsystem: Not Supported 00:21:58.359 Fixed Capacity Management: Not Supported 00:21:58.359 Variable Capacity Management: Not Supported 00:21:58.359 Delete Endurance Group: Not Supported 00:21:58.359 Delete NVM Set: Not Supported 00:21:58.359 Extended LBA Formats Supported: Not Supported 00:21:58.359 Flexible Data Placement Supported: Not Supported 00:21:58.359 00:21:58.359 Controller Memory Buffer Support 00:21:58.359 ================================ 00:21:58.359 Supported: No 00:21:58.359 00:21:58.359 Persistent Memory Region Support 00:21:58.359 ================================ 00:21:58.359 Supported: No 00:21:58.359 00:21:58.359 Admin Command Set Attributes 00:21:58.359 ============================ 00:21:58.359 Security Send/Receive: Not Supported 00:21:58.359 Format NVM: Not Supported 00:21:58.359 Firmware Activate/Download: Not Supported 00:21:58.359 Namespace Management: Not Supported 00:21:58.359 Device Self-Test: Not Supported 00:21:58.359 Directives: Not Supported 00:21:58.359 NVMe-MI: Not Supported 00:21:58.359 Virtualization Management: Not Supported 00:21:58.359 Doorbell Buffer Config: Not Supported 00:21:58.359 Get LBA Status Capability: Not Supported 00:21:58.359 Command & Feature Lockdown Capability: Not Supported 00:21:58.359 Abort Command Limit: 1 00:21:58.359 Async
Event Request Limit: 4 00:21:58.359 Number of Firmware Slots: N/A 00:21:58.359 Firmware Slot 1 Read-Only: N/A 00:21:58.359 Firmware Activation Without Reset: N/A 00:21:58.359 Multiple Update Detection Support: N/A 00:21:58.359 Firmware Update Granularity: No Information Provided 00:21:58.359 Per-Namespace SMART Log: No 00:21:58.359 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.359 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:58.359 Command Effects Log Page: Not Supported 00:21:58.359 Get Log Page Extended Data: Supported 00:21:58.359 Telemetry Log Pages: Not Supported 00:21:58.359 Persistent Event Log Pages: Not Supported 00:21:58.359 Supported Log Pages Log Page: May Support 00:21:58.359 Commands Supported & Effects Log Page: Not Supported 00:21:58.359 Feature Identifiers & Effects Log Page: May Support 00:21:58.359 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.359 Data Area 4 for Telemetry Log: Not Supported 00:21:58.359 Error Log Page Entries Supported: 128 00:21:58.359 Keep Alive: Not Supported 00:21:58.359 00:21:58.359 NVM Command Set Attributes 00:21:58.359 ========================== 00:21:58.359 Submission Queue Entry Size 00:21:58.359 Max: 1 00:21:58.359 Min: 1 00:21:58.359 Completion Queue Entry Size 00:21:58.359 Max: 1 00:21:58.359 Min: 1 00:21:58.359 Number of Namespaces: 0 00:21:58.359 Compare Command: Not Supported 00:21:58.359 Write Uncorrectable Command: Not Supported 00:21:58.359 Dataset Management Command: Not Supported 00:21:58.359 Write Zeroes Command: Not Supported 00:21:58.359 Set Features Save Field: Not Supported 00:21:58.359 Reservations: Not Supported 00:21:58.359 Timestamp: Not Supported 00:21:58.359 Copy: Not Supported 00:21:58.359 Volatile Write Cache: Not Present 00:21:58.359 Atomic Write Unit (Normal): 1 00:21:58.359 Atomic Write Unit (PFail): 1 00:21:58.359 Atomic Compare & Write Unit: 1 00:21:58.359 Fused Compare & Write: Supported 00:21:58.359 Scatter-Gather List 00:21:58.359 SGL Command Set: Supported 00:21:58.359 SGL Keyed: Supported 00:21:58.359 SGL Bit Bucket Descriptor: Not Supported 00:21:58.359 SGL Metadata Pointer: Not Supported 00:21:58.359 Oversized SGL: Not Supported 00:21:58.359 SGL Metadata Address: Not Supported 00:21:58.359 SGL Offset: Supported 00:21:58.359 Transport SGL Data Block: Not Supported 00:21:58.359 Replay Protected Memory Block: Not Supported 00:21:58.359 00:21:58.359 Firmware Slot Information 00:21:58.359 ========================= 00:21:58.359 Active slot: 0 00:21:58.359 00:21:58.359 00:21:58.359 Error Log 00:21:58.359 ========= 00:21:58.359 00:21:58.359 Active Namespaces 00:21:58.359 ================= 00:21:58.359 Discovery Log Page 00:21:58.359 ================== 00:21:58.359 Generation Counter: 2 00:21:58.359 Number of Records: 2 00:21:58.359 Record Format: 0 00:21:58.359 00:21:58.359 Discovery Log Entry 0 00:21:58.359 ---------------------- 00:21:58.359 Transport Type: 3 (TCP) 00:21:58.359 Address Family: 1 (IPv4) 00:21:58.359 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:58.359 Entry Flags: 00:21:58.359 Duplicate Returned Information: 1 00:21:58.359 Explicit Persistent Connection Support for Discovery: 1 00:21:58.359 Transport Requirements: 00:21:58.359 Secure Channel: Not Required 00:21:58.359 Port ID: 0 (0x0000) 00:21:58.359 Controller ID: 65535 (0xffff) 00:21:58.359 Admin Max SQ Size: 128 00:21:58.359 Transport Service Identifier: 4420 00:21:58.359 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:58.359 Transport Address: 10.0.0.2 00:21:58.359
Discovery Log Entry 1 00:21:58.359 ---------------------- 00:21:58.359 Transport Type: 3 (TCP) 00:21:58.359 Address Family: 1 (IPv4) 00:21:58.359 Subsystem Type: 2 (NVM Subsystem) 00:21:58.359 Entry Flags: 00:21:58.359 Duplicate Returned Information: 0 00:21:58.359 Explicit Persistent Connection Support for Discovery: 0 00:21:58.359 Transport Requirements: 00:21:58.359 Secure Channel: Not Required 00:21:58.359 Port ID: 0 (0x0000) 00:21:58.359 Controller ID: 65535 (0xffff) 00:21:58.359 Admin Max SQ Size: 128 00:21:58.359 Transport Service Identifier: 4420 00:21:58.359 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:58.359 Transport Address: 10.0.0.2 [2024-07-24 19:57:49.925366] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:58.359 [2024-07-24 19:57:49.925375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705e40) on tqpair=0x682ec0 00:21:58.359 [2024-07-24 19:57:49.925381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.359 [2024-07-24 19:57:49.925386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x705fc0) on tqpair=0x682ec0 00:21:58.359 [2024-07-24 19:57:49.925391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.359 [2024-07-24 19:57:49.925396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x706140) on tqpair=0x682ec0 00:21:58.359 [2024-07-24 19:57:49.925400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.359 [2024-07-24 19:57:49.925404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7062c0) on tqpair=0x682ec0 00:21:58.359 [2024-07-24 19:57:49.925408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.359 [2024-07-24 19:57:49.925418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x682ec0) 00:21:58.359 [2024-07-24 19:57:49.925432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.359 [2024-07-24 19:57:49.925446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7062c0, cid 3, qid 0 00:21:58.359 [2024-07-24 19:57:49.925587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.359 [2024-07-24 19:57:49.925597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.359 [2024-07-24 19:57:49.925600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7062c0) on tqpair=0x682ec0 00:21:58.359 [2024-07-24 19:57:49.925611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x682ec0) 00:21:58.359 [2024-07-24 19:57:49.925624] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.359 [2024-07-24 19:57:49.925640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7062c0, cid 3, qid 0 00:21:58.359 [2024-07-24 19:57:49.925800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.359 [2024-07-24 19:57:49.925810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.359 [2024-07-24 19:57:49.925813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.359 [2024-07-24 19:57:49.925816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7062c0) on tqpair=0x682ec0 00:21:58.360 [2024-07-24 19:57:49.925821] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:58.360 [2024-07-24 19:57:49.925825] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:58.360 [2024-07-24 19:57:49.925836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.925839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.925843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x682ec0) 00:21:58.360 [2024-07-24 19:57:49.925849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.360 [2024-07-24 19:57:49.925861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7062c0, cid 3, qid 0 00:21:58.360 [2024-07-24 19:57:49.926005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.360 [2024-07-24 19:57:49.926015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.360 [2024-07-24 19:57:49.926018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.926021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7062c0) on tqpair=0x682ec0 00:21:58.360 [2024-07-24 19:57:49.926032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.926039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.930048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x682ec0) 00:21:58.360 [2024-07-24 19:57:49.930055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.360 [2024-07-24 19:57:49.930069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7062c0, cid 3, qid 0 00:21:58.360 [2024-07-24 19:57:49.930308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.360 [2024-07-24 19:57:49.930318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.360 [2024-07-24 19:57:49.930321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.360 [2024-07-24 19:57:49.930325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7062c0) on tqpair=0x682ec0 00:21:58.360 [2024-07-24 19:57:49.930333] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:58.360 00:21:58.360 19:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:58.684 [2024-07-24 19:57:49.966826] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:21:58.684 [2024-07-24 19:57:49.966862] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128858 ] 00:21:58.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.684 [2024-07-24 19:57:49.996313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:58.684 [2024-07-24 19:57:49.996353] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:58.684 [2024-07-24 19:57:49.996358] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:58.684 [2024-07-24 19:57:49.996369] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:58.684 [2024-07-24 19:57:49.996377] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:58.684 [2024-07-24 19:57:49.996940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:58.684 [2024-07-24 19:57:49.996960] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b91ec0 0 00:21:58.684 [2024-07-24 19:57:50.010051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:58.684 [2024-07-24 19:57:50.010071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:58.684 [2024-07-24 19:57:50.010075] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:58.684 [2024-07-24 19:57:50.010078] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:58.684 [2024-07-24 19:57:50.010115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.010120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.010124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.684 [2024-07-24 19:57:50.010135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:58.684 [2024-07-24 19:57:50.010150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.684 [2024-07-24 19:57:50.017051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.684 [2024-07-24 19:57:50.017059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.684 [2024-07-24 19:57:50.017065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.684 [2024-07-24 19:57:50.017079] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:58.684 [2024-07-24 19:57:50.017086] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:58.684 [2024-07-24 19:57:50.017091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:58.684 [2024-07-24 19:57:50.017102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.684 [2024-07-24 19:57:50.017116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.684 [2024-07-24 19:57:50.017129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.684 [2024-07-24 19:57:50.017340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.684 [2024-07-24 19:57:50.017353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.684 [2024-07-24 19:57:50.017357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.684 [2024-07-24 19:57:50.017370] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:58.684 [2024-07-24 19:57:50.017380] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:58.684 [2024-07-24 19:57:50.017389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.684 [2024-07-24 19:57:50.017404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.684 [2024-07-24 19:57:50.017417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.684 [2024-07-24 19:57:50.017559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.684 [2024-07-24 19:57:50.017569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.684 [2024-07-24 19:57:50.017572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.684 [2024-07-24 19:57:50.017581] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:58.684 [2024-07-24 19:57:50.017591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.684 [2024-07-24 19:57:50.017598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.684 [2024-07-24 19:57:50.017606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.684 [2024-07-24 19:57:50.017612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.684 [2024-07-24 19:57:50.017625] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.684 [2024-07-24 19:57:50.017767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.684 [2024-07-24 19:57:50.017777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.684 [2024-07-24 19:57:50.017780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.017788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.685 [2024-07-24 19:57:50.017793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.685 [2024-07-24 19:57:50.017803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.017807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.017811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.685 [2024-07-24 19:57:50.017817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.685 [2024-07-24 19:57:50.017830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.685 [2024-07-24 19:57:50.017973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.685 [2024-07-24 19:57:50.017983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.685 [2024-07-24 19:57:50.017986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.017989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.685 [2024-07-24 19:57:50.017994] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:58.685 [2024-07-24 19:57:50.017999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:58.685 [2024-07-24 19:57:50.018008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.685 [2024-07-24 19:57:50.018113] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:58.685 [2024-07-24 19:57:50.018117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:58.685 [2024-07-24 19:57:50.018126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.685 [2024-07-24 19:57:50.018139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.685 [2024-07-24 19:57:50.018152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.685 [2024-07-24 19:57:50.018298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.685 [2024-07-24 19:57:50.018308] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.685 [2024-07-24 19:57:50.018311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.685 [2024-07-24 19:57:50.018320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.685 [2024-07-24 19:57:50.018330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.685 [2024-07-24 19:57:50.018344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.685 [2024-07-24 19:57:50.018356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.685 [2024-07-24 19:57:50.018501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.685 [2024-07-24 19:57:50.018510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.685 [2024-07-24 19:57:50.018514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.685 [2024-07-24 19:57:50.018524] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.685 [2024-07-24 19:57:50.018529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:58.685 [2024-07-24 19:57:50.018537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:58.685 [2024-07-24 19:57:50.018545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.685 [2024-07-24 19:57:50.018554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.685 [2024-07-24 19:57:50.018564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.685 [2024-07-24 19:57:50.018577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.685 [2024-07-24 19:57:50.018760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.685 [2024-07-24 19:57:50.018771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.685 [2024-07-24 19:57:50.018774] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018777] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=4096, cccid=0 00:21:58.685 [2024-07-24 19:57:50.018781] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c14e40) on tqpair(0x1b91ec0): expected_datao=0, 
payload_size=4096 00:21:58.685 [2024-07-24 19:57:50.018786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018792] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.018796] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.019068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.685 [2024-07-24 19:57:50.019074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.685 [2024-07-24 19:57:50.019077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.019081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.685 [2024-07-24 19:57:50.019087] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:58.685 [2024-07-24 19:57:50.019091] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:58.685 [2024-07-24 19:57:50.019096] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:58.685 [2024-07-24 19:57:50.019099] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:58.685 [2024-07-24 19:57:50.019103] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:58.685 [2024-07-24 19:57:50.019108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:58.685 [2024-07-24 19:57:50.019116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.685 [2024-07-24 19:57:50.019125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.019129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.685 [2024-07-24 19:57:50.019132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.686 [2024-07-24 19:57:50.019154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.686 [2024-07-24 19:57:50.019304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.686 [2024-07-24 19:57:50.019313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.686 [2024-07-24 19:57:50.019317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.686 [2024-07-24 19:57:50.019327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
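
The records above trace the host-side initialization state machine for nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN written to 1, a wait for CSTS.RDY, IDENTIFY controller, then AER configuration. All of it is driven by a single spdk_nvme_connect() call. Below is a minimal illustrative sketch against SPDK's public API, not the test's own code; error handling is abbreviated and the transport string is copied from the spdk_nvme_identify invocation above.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Runs the sequence the DEBUG records trace: icreq/icresp,
         * FABRIC CONNECT, VS/CAP property reads, CC.EN=1, CSTS.RDY
         * poll, IDENTIFY, AER setup; returns once state is "ready". */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* The IDENTIFY data is cached during init. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("VID 0x%04x SN %.20s FR %.8s\n",
               cdata->vid, cdata->sn, cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
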
00:21:58.686 [2024-07-24 19:57:50.019345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.686 [2024-07-24 19:57:50.019361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.686 [2024-07-24 19:57:50.019378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.686 [2024-07-24 19:57:50.019393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.686 [2024-07-24 19:57:50.019434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14e40, cid 0, qid 0 00:21:58.686 [2024-07-24 19:57:50.019439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c14fc0, cid 1, qid 0 00:21:58.686 [2024-07-24 19:57:50.019442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15140, cid 2, qid 0 00:21:58.686 [2024-07-24 19:57:50.019447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.686 [2024-07-24 19:57:50.019451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.686 [2024-07-24 19:57:50.019627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.686 [2024-07-24 19:57:50.019637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.686 [2024-07-24 19:57:50.019640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.686 [2024-07-24 19:57:50.019651] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:58.686 [2024-07-24 19:57:50.019656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.686 [2024-07-24 19:57:50.019706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.686 [2024-07-24 19:57:50.019848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.686 [2024-07-24 19:57:50.019858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.686 [2024-07-24 19:57:50.019861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.686 [2024-07-24 19:57:50.019919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.019938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.019941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.019947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.686 [2024-07-24 19:57:50.019960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.686 [2024-07-24 19:57:50.020125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.686 [2024-07-24 19:57:50.020136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.686 [2024-07-24 19:57:50.020140] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.020143] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=4096, cccid=4 00:21:58.686 [2024-07-24 19:57:50.020147] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c15440) on tqpair(0x1b91ec0): expected_datao=0, payload_size=4096 00:21:58.686 [2024-07-24 19:57:50.020151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.686 [2024-07-24 
19:57:50.020380] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.020384] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.064052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.686 [2024-07-24 19:57:50.064068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.686 [2024-07-24 19:57:50.064072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.064076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.686 [2024-07-24 19:57:50.064087] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:58.686 [2024-07-24 19:57:50.064106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.064115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:58.686 [2024-07-24 19:57:50.064122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.686 [2024-07-24 19:57:50.064126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.686 [2024-07-24 19:57:50.064134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.687 [2024-07-24 19:57:50.064148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.687 [2024-07-24 19:57:50.064396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.687 [2024-07-24 19:57:50.064407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.687 [2024-07-24 19:57:50.064410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064413] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=4096, cccid=4 00:21:58.687 [2024-07-24 19:57:50.064418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c15440) on tqpair(0x1b91ec0): expected_datao=0, payload_size=4096 00:21:58.687 [2024-07-24 19:57:50.064422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064675] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.687 [2024-07-24 19:57:50.064835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.687 [2024-07-24 19:57:50.064838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.687 [2024-07-24 19:57:50.064855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.064866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:58.687 
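
Once the active namespace list arrives (IDENTIFY with cdw10:00000002, CNS 0x02), the driver logs "Namespace 1 was added" and follows with the per-namespace IDENTIFY (cdw10:00000000) and namespace ID descriptor (cdw10:00000003) commands seen above; the low byte of cdw10 is the CNS value. A short sketch of walking those namespaces through the public iteration API; the function name is illustrative and ctrlr is the handle from the previous sketch.

    /* Walk the active namespaces populated from the IDENTIFY responses
     * logged above ("Namespace 1 was added" corresponds to nsid 1). */
    static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n",
                   nsid, spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
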
[2024-07-24 19:57:50.064874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.064877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.687 [2024-07-24 19:57:50.064884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.687 [2024-07-24 19:57:50.064897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.687 [2024-07-24 19:57:50.065056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.687 [2024-07-24 19:57:50.065067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.687 [2024-07-24 19:57:50.065071] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065074] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=4096, cccid=4 00:21:58.687 [2024-07-24 19:57:50.065078] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c15440) on tqpair(0x1b91ec0): expected_datao=0, payload_size=4096 00:21:58.687 [2024-07-24 19:57:50.065082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065319] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065323] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.687 [2024-07-24 19:57:50.065477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.687 [2024-07-24 19:57:50.065484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.687 [2024-07-24 19:57:50.065496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065549] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:58.687 [2024-07-24 19:57:50.065553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:58.687 [2024-07-24 19:57:50.065558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:21:58.687 [2024-07-24 19:57:50.065572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.687 [2024-07-24 19:57:50.065582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.687 [2024-07-24 19:57:50.065588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b91ec0) 00:21:58.687 [2024-07-24 19:57:50.065600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.687 [2024-07-24 19:57:50.065616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.687 [2024-07-24 19:57:50.065621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c155c0, cid 5, qid 0 00:21:58.687 [2024-07-24 19:57:50.065780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.687 [2024-07-24 19:57:50.065790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.687 [2024-07-24 19:57:50.065794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.687 [2024-07-24 19:57:50.065803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.687 [2024-07-24 19:57:50.065809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.687 [2024-07-24 19:57:50.065812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c155c0) on tqpair=0x1b91ec0 00:21:58.687 [2024-07-24 19:57:50.065826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.687 [2024-07-24 19:57:50.065829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b91ec0) 00:21:58.687 [2024-07-24 19:57:50.065836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.687 [2024-07-24 19:57:50.065848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c155c0, cid 5, qid 0 00:21:58.687 [2024-07-24 19:57:50.066204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.687 [2024-07-24 19:57:50.066210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.688 [2024-07-24 19:57:50.066214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c155c0) on tqpair=0x1b91ec0 00:21:58.688 [2024-07-24 19:57:50.066226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.688 [2024-07-24 19:57:50.066246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c155c0, cid 5, qid 0 00:21:58.688 [2024-07-24 19:57:50.066395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.688 [2024-07-24 19:57:50.066405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.688 [2024-07-24 19:57:50.066409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c155c0) on tqpair=0x1b91ec0 00:21:58.688 [2024-07-24 19:57:50.066423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.688 [2024-07-24 19:57:50.066446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c155c0, cid 5, qid 0 00:21:58.688 [2024-07-24 19:57:50.066631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.688 [2024-07-24 19:57:50.066641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.688 [2024-07-24 19:57:50.066644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c155c0) on tqpair=0x1b91ec0 00:21:58.688 [2024-07-24 19:57:50.066666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.688 [2024-07-24 19:57:50.066683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.688 [2024-07-24 19:57:50.066698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.688 [2024-07-24 19:57:50.066714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b91ec0) 00:21:58.688 [2024-07-24 19:57:50.066722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
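
The four GET LOG PAGE commands above request log identifiers 0x01 (error information), 0x02 (SMART/health), 0x03 (firmware slot), and 0x05 (commands supported and effects); the low byte of cdw10 carries the log page ID and the bits above it the 0's-based dword count, which matches the datal values (8192, 512, 512, 4096) in the c2h data PDUs that follow. A hedged sketch of issuing one such admin command explicitly and reaping it through the admin-queue poll loop; names like fetch_health are illustrative, and the static buffer placement follows SPDK's own identify example.

    static struct spdk_nvme_health_information_page g_health;
    static volatile bool g_done;

    static void get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_done = true;
    }

    /* Fetch the SMART/health page (log ID 0x02) shown being read above;
     * the completion arrives through the same nvme_tcp_req_complete
     * path the DEBUG records trace. */
    static void fetch_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        g_done = false;
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                &g_health, sizeof(g_health), 0, get_log_done, NULL) != 0) {
            return;
        }
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }
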
00:21:58.688 [2024-07-24 19:57:50.066736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c155c0, cid 5, qid 0 00:21:58.688 [2024-07-24 19:57:50.066744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15440, cid 4, qid 0 00:21:58.688 [2024-07-24 19:57:50.066748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c15740, cid 6, qid 0 00:21:58.688 [2024-07-24 19:57:50.066753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c158c0, cid 7, qid 0 00:21:58.688 [2024-07-24 19:57:50.066960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.688 [2024-07-24 19:57:50.066970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.688 [2024-07-24 19:57:50.066973] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.066976] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=8192, cccid=5 00:21:58.688 [2024-07-24 19:57:50.066981] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c155c0) on tqpair(0x1b91ec0): expected_datao=0, payload_size=8192 00:21:58.688 [2024-07-24 19:57:50.066985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067504] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067509] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.688 [2024-07-24 19:57:50.067518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.688 [2024-07-24 19:57:50.067521] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067524] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=512, cccid=4 00:21:58.688 [2024-07-24 19:57:50.067528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c15440) on tqpair(0x1b91ec0): expected_datao=0, payload_size=512 00:21:58.688 [2024-07-24 19:57:50.067532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067541] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.688 [2024-07-24 19:57:50.067551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.688 [2024-07-24 19:57:50.067554] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067557] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=512, cccid=6 00:21:58.688 [2024-07-24 19:57:50.067560] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c15740) on tqpair(0x1b91ec0): expected_datao=0, payload_size=512 00:21:58.688 [2024-07-24 19:57:50.067564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067570] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067573] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.688 
[2024-07-24 19:57:50.067583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.688 [2024-07-24 19:57:50.067586] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.688 [2024-07-24 19:57:50.067588] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b91ec0): datao=0, datal=4096, cccid=7 00:21:58.689 [2024-07-24 19:57:50.067592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c158c0) on tqpair(0x1b91ec0): expected_datao=0, payload_size=4096 00:21:58.689 [2024-07-24 19:57:50.067596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067602] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067605] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.689 [2024-07-24 19:57:50.067817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.689 [2024-07-24 19:57:50.067820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c155c0) on tqpair=0x1b91ec0 00:21:58.689 [2024-07-24 19:57:50.067839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.689 [2024-07-24 19:57:50.067844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.689 [2024-07-24 19:57:50.067847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15440) on tqpair=0x1b91ec0 00:21:58.689 [2024-07-24 19:57:50.067859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.689 [2024-07-24 19:57:50.067865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.689 [2024-07-24 19:57:50.067868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15740) on tqpair=0x1b91ec0 00:21:58.689 [2024-07-24 19:57:50.067877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.689 [2024-07-24 19:57:50.067882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.689 [2024-07-24 19:57:50.067885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.689 [2024-07-24 19:57:50.067888] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c158c0) on tqpair=0x1b91ec0 00:21:58.689 ===================================================== 00:21:58.689 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.689 ===================================================== 00:21:58.689 Controller Capabilities/Features 00:21:58.689 ================================ 00:21:58.689 Vendor ID: 8086 00:21:58.689 Subsystem Vendor ID: 8086 00:21:58.689 Serial Number: SPDK00000000000001 00:21:58.689 Model Number: SPDK bdev Controller 00:21:58.689 Firmware Version: 24.09 00:21:58.689 Recommended Arb Burst: 6 00:21:58.689 IEEE OUI Identifier: e4 d2 5c 00:21:58.689 Multi-path I/O 00:21:58.689 May have multiple subsystem ports: Yes 00:21:58.689 May have multiple controllers: Yes 00:21:58.689 Associated with SR-IOV VF: No 00:21:58.689 Max Data Transfer Size: 131072 00:21:58.689 Max Number of Namespaces: 32 00:21:58.689 
Max Number of I/O Queues: 127 00:21:58.689 NVMe Specification Version (VS): 1.3 00:21:58.689 NVMe Specification Version (Identify): 1.3 00:21:58.689 Maximum Queue Entries: 128 00:21:58.689 Contiguous Queues Required: Yes 00:21:58.689 Arbitration Mechanisms Supported 00:21:58.689 Weighted Round Robin: Not Supported 00:21:58.689 Vendor Specific: Not Supported 00:21:58.689 Reset Timeout: 15000 ms 00:21:58.689 Doorbell Stride: 4 bytes 00:21:58.689 NVM Subsystem Reset: Not Supported 00:21:58.689 Command Sets Supported 00:21:58.689 NVM Command Set: Supported 00:21:58.689 Boot Partition: Not Supported 00:21:58.689 Memory Page Size Minimum: 4096 bytes 00:21:58.689 Memory Page Size Maximum: 4096 bytes 00:21:58.689 Persistent Memory Region: Not Supported 00:21:58.689 Optional Asynchronous Events Supported 00:21:58.689 Namespace Attribute Notices: Supported 00:21:58.689 Firmware Activation Notices: Not Supported 00:21:58.689 ANA Change Notices: Not Supported 00:21:58.689 PLE Aggregate Log Change Notices: Not Supported 00:21:58.689 LBA Status Info Alert Notices: Not Supported 00:21:58.689 EGE Aggregate Log Change Notices: Not Supported 00:21:58.689 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.689 Zone Descriptor Change Notices: Not Supported 00:21:58.689 Discovery Log Change Notices: Not Supported 00:21:58.689 Controller Attributes 00:21:58.689 128-bit Host Identifier: Supported 00:21:58.689 Non-Operational Permissive Mode: Not Supported 00:21:58.689 NVM Sets: Not Supported 00:21:58.689 Read Recovery Levels: Not Supported 00:21:58.689 Endurance Groups: Not Supported 00:21:58.689 Predictable Latency Mode: Not Supported 00:21:58.689 Traffic Based Keep ALive: Not Supported 00:21:58.689 Namespace Granularity: Not Supported 00:21:58.689 SQ Associations: Not Supported 00:21:58.689 UUID List: Not Supported 00:21:58.689 Multi-Domain Subsystem: Not Supported 00:21:58.689 Fixed Capacity Management: Not Supported 00:21:58.689 Variable Capacity Management: Not Supported 00:21:58.689 Delete Endurance Group: Not Supported 00:21:58.689 Delete NVM Set: Not Supported 00:21:58.689 Extended LBA Formats Supported: Not Supported 00:21:58.689 Flexible Data Placement Supported: Not Supported 00:21:58.689 00:21:58.689 Controller Memory Buffer Support 00:21:58.689 ================================ 00:21:58.689 Supported: No 00:21:58.689 00:21:58.689 Persistent Memory Region Support 00:21:58.690 ================================ 00:21:58.690 Supported: No 00:21:58.690 00:21:58.690 Admin Command Set Attributes 00:21:58.690 ============================ 00:21:58.690 Security Send/Receive: Not Supported 00:21:58.690 Format NVM: Not Supported 00:21:58.690 Firmware Activate/Download: Not Supported 00:21:58.690 Namespace Management: Not Supported 00:21:58.690 Device Self-Test: Not Supported 00:21:58.690 Directives: Not Supported 00:21:58.690 NVMe-MI: Not Supported 00:21:58.690 Virtualization Management: Not Supported 00:21:58.690 Doorbell Buffer Config: Not Supported 00:21:58.690 Get LBA Status Capability: Not Supported 00:21:58.690 Command & Feature Lockdown Capability: Not Supported 00:21:58.690 Abort Command Limit: 4 00:21:58.690 Async Event Request Limit: 4 00:21:58.690 Number of Firmware Slots: N/A 00:21:58.690 Firmware Slot 1 Read-Only: N/A 00:21:58.690 Firmware Activation Without Reset: N/A 00:21:58.690 Multiple Update Detection Support: N/A 00:21:58.690 Firmware Update Granularity: No Information Provided 00:21:58.690 Per-Namespace SMART Log: No 00:21:58.690 Asymmetric Namespace Access Log Page: Not Supported 
00:21:58.690 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:58.690 Command Effects Log Page: Supported 00:21:58.690 Get Log Page Extended Data: Supported 00:21:58.690 Telemetry Log Pages: Not Supported 00:21:58.690 Persistent Event Log Pages: Not Supported 00:21:58.690 Supported Log Pages Log Page: May Support 00:21:58.690 Commands Supported & Effects Log Page: Not Supported 00:21:58.690 Feature Identifiers & Effects Log Page:May Support 00:21:58.690 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.690 Data Area 4 for Telemetry Log: Not Supported 00:21:58.690 Error Log Page Entries Supported: 128 00:21:58.690 Keep Alive: Supported 00:21:58.690 Keep Alive Granularity: 10000 ms 00:21:58.690 00:21:58.690 NVM Command Set Attributes 00:21:58.690 ========================== 00:21:58.690 Submission Queue Entry Size 00:21:58.690 Max: 64 00:21:58.690 Min: 64 00:21:58.690 Completion Queue Entry Size 00:21:58.690 Max: 16 00:21:58.690 Min: 16 00:21:58.690 Number of Namespaces: 32 00:21:58.690 Compare Command: Supported 00:21:58.690 Write Uncorrectable Command: Not Supported 00:21:58.690 Dataset Management Command: Supported 00:21:58.690 Write Zeroes Command: Supported 00:21:58.690 Set Features Save Field: Not Supported 00:21:58.690 Reservations: Supported 00:21:58.690 Timestamp: Not Supported 00:21:58.690 Copy: Supported 00:21:58.690 Volatile Write Cache: Present 00:21:58.690 Atomic Write Unit (Normal): 1 00:21:58.690 Atomic Write Unit (PFail): 1 00:21:58.690 Atomic Compare & Write Unit: 1 00:21:58.690 Fused Compare & Write: Supported 00:21:58.690 Scatter-Gather List 00:21:58.690 SGL Command Set: Supported 00:21:58.690 SGL Keyed: Supported 00:21:58.690 SGL Bit Bucket Descriptor: Not Supported 00:21:58.690 SGL Metadata Pointer: Not Supported 00:21:58.690 Oversized SGL: Not Supported 00:21:58.690 SGL Metadata Address: Not Supported 00:21:58.690 SGL Offset: Supported 00:21:58.690 Transport SGL Data Block: Not Supported 00:21:58.690 Replay Protected Memory Block: Not Supported 00:21:58.690 00:21:58.690 Firmware Slot Information 00:21:58.690 ========================= 00:21:58.690 Active slot: 1 00:21:58.690 Slot 1 Firmware Revision: 24.09 00:21:58.690 00:21:58.690 00:21:58.690 Commands Supported and Effects 00:21:58.690 ============================== 00:21:58.690 Admin Commands 00:21:58.690 -------------- 00:21:58.690 Get Log Page (02h): Supported 00:21:58.690 Identify (06h): Supported 00:21:58.690 Abort (08h): Supported 00:21:58.690 Set Features (09h): Supported 00:21:58.690 Get Features (0Ah): Supported 00:21:58.690 Asynchronous Event Request (0Ch): Supported 00:21:58.690 Keep Alive (18h): Supported 00:21:58.690 I/O Commands 00:21:58.690 ------------ 00:21:58.690 Flush (00h): Supported LBA-Change 00:21:58.690 Write (01h): Supported LBA-Change 00:21:58.690 Read (02h): Supported 00:21:58.690 Compare (05h): Supported 00:21:58.690 Write Zeroes (08h): Supported LBA-Change 00:21:58.690 Dataset Management (09h): Supported LBA-Change 00:21:58.690 Copy (19h): Supported LBA-Change 00:21:58.690 00:21:58.690 Error Log 00:21:58.690 ========= 00:21:58.690 00:21:58.690 Arbitration 00:21:58.690 =========== 00:21:58.690 Arbitration Burst: 1 00:21:58.690 00:21:58.691 Power Management 00:21:58.691 ================ 00:21:58.691 Number of Power States: 1 00:21:58.691 Current Power State: Power State #0 00:21:58.691 Power State #0: 00:21:58.691 Max Power: 0.00 W 00:21:58.691 Non-Operational State: Operational 00:21:58.691 Entry Latency: Not Reported 00:21:58.691 Exit Latency: Not Reported 00:21:58.691 Relative Read 
Throughput: 0 00:21:58.691 Relative Read Latency: 0 00:21:58.691 Relative Write Throughput: 0 00:21:58.691 Relative Write Latency: 0 00:21:58.691 Idle Power: Not Reported 00:21:58.691 Active Power: Not Reported 00:21:58.691 Non-Operational Permissive Mode: Not Supported 00:21:58.691 00:21:58.691 Health Information 00:21:58.691 ================== 00:21:58.691 Critical Warnings: 00:21:58.691 Available Spare Space: OK 00:21:58.691 Temperature: OK 00:21:58.691 Device Reliability: OK 00:21:58.691 Read Only: No 00:21:58.691 Volatile Memory Backup: OK 00:21:58.691 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:58.691 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:58.691 Available Spare: 0% 00:21:58.691 Available Spare Threshold: 0% 00:21:58.691 Life Percentage Used:[2024-07-24 19:57:50.067977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.067982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b91ec0) 00:21:58.691 [2024-07-24 19:57:50.067988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.691 [2024-07-24 19:57:50.068001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c158c0, cid 7, qid 0 00:21:58.691 [2024-07-24 19:57:50.072075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.691 [2024-07-24 19:57:50.072083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.691 [2024-07-24 19:57:50.072086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c158c0) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072118] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:58.691 [2024-07-24 19:57:50.072127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14e40) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.691 [2024-07-24 19:57:50.072137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c14fc0) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.691 [2024-07-24 19:57:50.072145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c15140) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.691 [2024-07-24 19:57:50.072153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.691 [2024-07-24 19:57:50.072165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.691 [2024-07-24 
19:57:50.072178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.691 [2024-07-24 19:57:50.072191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.691 [2024-07-24 19:57:50.072422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.691 [2024-07-24 19:57:50.072433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.691 [2024-07-24 19:57:50.072436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.691 [2024-07-24 19:57:50.072447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.691 [2024-07-24 19:57:50.072453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.072460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.072477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.072659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.072669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.072672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.072679] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:58.692 [2024-07-24 19:57:50.072683] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:58.692 [2024-07-24 19:57:50.072694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.072707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.072719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.072909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.072919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.072922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.072937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.072944] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.072950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.072962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.073147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.073157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.073161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.073176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.073189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.073204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.073350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.073360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.073363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.073377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.073390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.073403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.073588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.073597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.073600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.073615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.073629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.073640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.073788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.073798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.073801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.073815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.073822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.692 [2024-07-24 19:57:50.073829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.692 [2024-07-24 19:57:50.073840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.692 [2024-07-24 19:57:50.074025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.692 [2024-07-24 19:57:50.074035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.692 [2024-07-24 19:57:50.074038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.692 [2024-07-24 19:57:50.074041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.692 [2024-07-24 19:57:50.074059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.074072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.074085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.074227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.074237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.074240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.074255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.074268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.074280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 
19:57:50.074428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.074437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.074440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.074455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.074468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.074480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.074628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.074638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.074641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.074655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.074668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.074680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.074865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.074874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.074877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.074892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.074898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.074905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.074917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.075068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.075079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 
[2024-07-24 19:57:50.075082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.075097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.075110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.075122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.075304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.075314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.075317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.075332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.075345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.075356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.075506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.075516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.075519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.075533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.693 [2024-07-24 19:57:50.075546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-24 19:57:50.075558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.693 [2024-07-24 19:57:50.075708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-24 19:57:50.075718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-24 19:57:50.075720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.693 [2024-07-24 19:57:50.075735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-24 19:57:50.075742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.694 [2024-07-24 19:57:50.075748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-24 19:57:50.075760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.694 [2024-07-24 19:57:50.075902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-24 19:57:50.075911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-24 19:57:50.075917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.075921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.694 [2024-07-24 19:57:50.075931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.075935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.075938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.694 [2024-07-24 19:57:50.075944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-24 19:57:50.075956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.694 [2024-07-24 19:57:50.080051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-24 19:57:50.080059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-24 19:57:50.080062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.080066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.694 [2024-07-24 19:57:50.080075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.080079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.080082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b91ec0) 00:21:58.694 [2024-07-24 19:57:50.080088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-24 19:57:50.080100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c152c0, cid 3, qid 0 00:21:58.694 [2024-07-24 19:57:50.080364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-24 19:57:50.080374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-24 19:57:50.080378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-24 19:57:50.080382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c152c0) on tqpair=0x1b91ec0 00:21:58.694 [2024-07-24 19:57:50.080390] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 
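The RTD3E and shutdown-timeout lines above, followed by the long run of FABRIC PROPERTY GET commands on cid:3, are the host tearing the controller down: it sets the shutdown notification in CC and then polls CSTS over the fabric until the shutdown-status field reports completion, which here takes 7 ms against the 10000 ms budget. A rough C sketch of that polling idea, assuming a hypothetical read_csts_property() helper; SPDK's real implementation is the asynchronous nvme_ctrlr_shutdown_poll_async() state machine seen in the log, not a blocking loop:

    #include <stdbool.h>
    #include <stdint.h>

    #define SHST_COMPLETE 0x2 /* CSTS.SHST value: shutdown processing complete */

    /* Assumed helper: reads the 32-bit CSTS register as a fabric property. */
    extern uint32_t read_csts_property(void *ctrlr);

    static bool wait_for_shutdown(void *ctrlr, uint64_t timeout_ticks,
                                  uint64_t (*now)(void))
    {
        uint64_t deadline = now() + timeout_ticks; /* e.g. 10000 ms of ticks */

        while (now() < deadline) {
            uint32_t csts = read_csts_property(ctrlr); /* one PROPERTY GET per poll */

            if (((csts >> 2) & 0x3) == SHST_COMPLETE) { /* SHST is CSTS bits 3:2 */
                return true;  /* "shutdown complete in N milliseconds" */
            }
        }
        return false; /* shutdown timed out */
    }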
00:21:58.694 0% 00:21:58.694 Data Units Read: 0 00:21:58.694 Data Units Written: 0 00:21:58.694 Host Read Commands: 0 00:21:58.694 Host Write Commands: 0 00:21:58.694 Controller Busy Time: 0 minutes 00:21:58.694 Power Cycles: 0 00:21:58.694 Power On Hours: 0 hours 00:21:58.694 Unsafe Shutdowns: 0 00:21:58.694 Unrecoverable Media Errors: 0 00:21:58.694 Lifetime Error Log Entries: 0 00:21:58.694 Warning Temperature Time: 0 minutes 00:21:58.694 Critical Temperature Time: 0 minutes 00:21:58.694 00:21:58.694 Number of Queues 00:21:58.694 ================ 00:21:58.694 Number of I/O Submission Queues: 127 00:21:58.694 Number of I/O Completion Queues: 127 00:21:58.694 00:21:58.694 Active Namespaces 00:21:58.694 ================= 00:21:58.694 Namespace ID:1 00:21:58.694 Error Recovery Timeout: Unlimited 00:21:58.694 Command Set Identifier: NVM (00h) 00:21:58.694 Deallocate: Supported 00:21:58.694 Deallocated/Unwritten Error: Not Supported 00:21:58.694 Deallocated Read Value: Unknown 00:21:58.694 Deallocate in Write Zeroes: Not Supported 00:21:58.694 Deallocated Guard Field: 0xFFFF 00:21:58.694 Flush: Supported 00:21:58.694 Reservation: Supported 00:21:58.694 Namespace Sharing Capabilities: Multiple Controllers 00:21:58.694 Size (in LBAs): 131072 (0GiB) 00:21:58.694 Capacity (in LBAs): 131072 (0GiB) 00:21:58.694 Utilization (in LBAs): 131072 (0GiB) 00:21:58.694 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:58.694 EUI64: ABCDEF0123456789 00:21:58.694 UUID: e3703dc4-a7c6-4931-a60d-c695b9852be9 00:21:58.694 Thin Provisioning: Not Supported 00:21:58.694 Per-NS Atomic Units: Yes 00:21:58.694 Atomic Boundary Size (Normal): 0 00:21:58.694 Atomic Boundary Size (PFail): 0 00:21:58.694 Atomic Boundary Offset: 0 00:21:58.694 Maximum Single Source Range Length: 65535 00:21:58.694 Maximum Copy Length: 65535 00:21:58.694 Maximum Source Range Count: 1 00:21:58.694 NGUID/EUI64 Never Reused: No 00:21:58.694 Namespace Write Protected: No 00:21:58.694 Number of LBA Formats: 1 00:21:58.694 Current LBA Format: LBA Format #00 00:21:58.694 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:58.694 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.694 rmmod nvme_tcp 00:21:58.694 rmmod nvme_fabrics 00:21:58.694 rmmod nvme_keyring 00:21:58.694 19:57:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2128606 ']' 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2128606 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2128606 ']' 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2128606 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2128606 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2128606' 00:21:58.694 killing process with pid 2128606 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2128606 00:21:58.694 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2128606 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.954 19:57:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.924 19:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.924 00:22:00.924 real 0m9.228s 00:22:00.924 user 0m7.764s 00:22:00.924 sys 0m4.371s 00:22:00.924 19:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.924 19:57:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:00.924 ************************************ 00:22:00.924 END TEST nvmf_identify 00:22:00.924 ************************************ 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.185 
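The controller dump above was produced by the identify host test driving SPDK's NVMe driver over TCP. A minimal C sketch, not the autotest's own code, of connecting to the same target the log shows (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) through SPDK's public driver API and reading back the cached identify data; error handling is trimmed for brevity:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* The target the log shows: 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1 */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* synchronous fabric connect */
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr); /* cached Identify Controller data */
        printf("Serial Number: %.20s\n", (const char *)cdata->sn); /* SPDK00000000000001 */
        printf("Model Number:  %.40s\n", (const char *)cdata->mn); /* SPDK bdev Controller */

        spdk_nvme_detach(ctrlr);
        return 0;
    }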
************************************ 00:22:01.185 START TEST nvmf_perf 00:22:01.185 ************************************ 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.185 * Looking for test storage... 00:22:01.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.185 19:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.469 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.469 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.469 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.470 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.470 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1
00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:06.470 19:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:06.470 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:06.470 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:06.470 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:06.470 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:06.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:06.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:22:06.730
00:22:06.730 --- 10.0.0.2 ping statistics ---
00:22:06.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:06.730 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:06.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:06.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms
00:22:06.730
00:22:06.730 --- 10.0.0.1 ping statistics ---
00:22:06.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:06.730 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2132325
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2132325
00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2132325 ']' 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.730 19:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.730 [2024-07-24 19:57:58.206362] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:22:06.730 [2024-07-24 19:57:58.206406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.730 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.730 [2024-07-24 19:57:58.264734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.989 [2024-07-24 19:57:58.350229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.989 [2024-07-24 19:57:58.350266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.989 [2024-07-24 19:57:58.350273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.989 [2024-07-24 19:57:58.350279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.989 [2024-07-24 19:57:58.350284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.990 [2024-07-24 19:57:58.350315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.990 [2024-07-24 19:57:58.350411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.990 [2024-07-24 19:57:58.350525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.990 [2024-07-24 19:57:58.350526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:07.559 19:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:10.848 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:10.848 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:10.848 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:10.848 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.107 [2024-07-24 19:58:02.636158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.107 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.366 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.366 19:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.624 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.624 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:11.624 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:11.882 [2024-07-24 19:58:03.374908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:11.882 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:12.141 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:22:12.141 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:12.141 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:12.141 19:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:13.522 Initializing NVMe Controllers
00:22:13.522 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:22:13.522 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:22:13.522 Initialization complete. Launching workers.
00:22:13.522 ========================================================
00:22:13.522 Latency(us)
00:22:13.522 Device Information : IOPS MiB/s Average min max
00:22:13.522 PCIE (0000:5e:00.0) NSID 1 from core 0: 97572.77 381.14 327.43 25.17 7288.58
00:22:13.522 ========================================================
00:22:13.522 Total : 97572.77 381.14 327.43 25.17 7288.58
00:22:13.522
00:22:13.522 19:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:13.522 EAL: No free 2048 kB hugepages reported on node 1
00:22:14.902 Initializing NVMe Controllers
00:22:14.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:14.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:14.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:14.902 Initialization complete. Launching workers.
00:22:14.902 ========================================================
00:22:14.902 Latency(us)
00:22:14.902 Device Information : IOPS MiB/s Average min max
00:22:14.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.66 0.37 10656.39 560.77 45451.51
00:22:14.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.81 0.21 18825.93 7966.84 47889.54
00:22:14.902 ========================================================
00:22:14.902 Total : 150.47 0.59 13632.05 560.77 47889.54
00:22:14.902
00:22:14.902 19:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:14.902 EAL: No free 2048 kB hugepages reported on node 1
00:22:16.284 Initializing NVMe Controllers
00:22:16.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:16.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:16.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:16.284 Initialization complete. Launching workers.
00:22:16.284 ========================================================
00:22:16.284 Latency(us)
00:22:16.284 Device Information : IOPS MiB/s Average min max
00:22:16.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8027.99 31.36 4004.07 758.17 8540.06
00:22:16.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3778.00 14.76 8509.14 6321.87 16167.55
00:22:16.284 ========================================================
00:22:16.284 Total : 11805.99 46.12 5445.73 758.17 16167.55
00:22:16.284
00:22:16.284 19:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:16.284 19:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:16.284 19:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:16.284 EAL: No free 2048 kB hugepages reported on node 1
00:22:18.826 Initializing NVMe Controllers
00:22:18.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:18.826 Controller IO queue size 128, less than required.
00:22:18.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.826 Controller IO queue size 128, less than required.
00:22:18.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:18.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:18.826 Initialization complete. Launching workers.
00:22:18.826 ========================================================
00:22:18.826 Latency(us)
00:22:18.826 Device Information : IOPS MiB/s Average min max
00:22:18.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 793.42 198.35 167656.61 111780.09 282641.77
00:22:18.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.22 143.05 237400.35 86410.62 357324.78
00:22:18.826 ========================================================
00:22:18.826 Total : 1365.63 341.41 196880.13 86410.62 357324.78
00:22:18.826
00:22:18.826 19:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:18.826 EAL: No free 2048 kB hugepages reported on node 1
00:22:19.086 No valid NVMe controllers or AIO or URING devices found
00:22:19.086 Initializing NVMe Controllers
00:22:19.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:19.086 Controller IO queue size 128, less than required.
00:22:19.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:19.086 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:19.086 Controller IO queue size 128, less than required.
00:22:19.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:19.086 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:19.086 WARNING: Some requested NVMe devices were skipped
00:22:19.086 19:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:19.086 EAL: No free 2048 kB hugepages reported on node 1
00:22:21.697 Initializing NVMe Controllers
00:22:21.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:21.697 Controller IO queue size 128, less than required.
00:22:21.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:21.697 Controller IO queue size 128, less than required.
00:22:21.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:21.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:21.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:21.697 Initialization complete. Launching workers.
00:22:21.697
00:22:21.697 ====================
00:22:21.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:21.697 TCP transport:
00:22:21.697 polls: 59585
00:22:21.697 idle_polls: 24110
00:22:21.697 sock_completions: 35475
00:22:21.697 nvme_completions: 3199
00:22:21.697 submitted_requests: 4856
00:22:21.697 queued_requests: 1
00:22:21.697
00:22:21.697 ====================
00:22:21.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:21.697 TCP transport:
00:22:21.697 polls: 62515
00:22:21.697 idle_polls: 21563
00:22:21.697 sock_completions: 40952
00:22:21.697 nvme_completions: 3291
00:22:21.697 submitted_requests: 4926
00:22:21.697 queued_requests: 1
00:22:21.697 ========================================================
00:22:21.697 Latency(us)
00:22:21.697 Device Information : IOPS MiB/s Average min max
00:22:21.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 798.57 199.64 164309.74 86571.29 287818.31
00:22:21.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 821.55 205.39 160891.89 71105.55 245275.15
00:22:21.697 ========================================================
00:22:21.697 Total : 1620.12 405.03 162576.58 71105.55 287818.31
00:22:21.697
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:21.697 rmmod nvme_tcp
00:22:21.697 rmmod nvme_fabrics
00:22:21.697 rmmod nvme_keyring
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2132325 ']'
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2132325
00:22:21.697 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2132325 ']'
00:22:21.698 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2132325
00:22:21.698 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:22:21.698 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:21.698 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2132325
00:22:21.960 19:58:13
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.960 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.960 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2132325' 00:22:21.960 killing process with pid 2132325 00:22:21.960 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2132325 00:22:21.960 19:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2132325 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.338 19:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.877 00:22:25.877 real 0m24.306s 00:22:25.877 user 1m6.214s 00:22:25.877 sys 0m6.771s 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.877 ************************************ 00:22:25.877 END TEST nvmf_perf 00:22:25.877 ************************************ 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.877 19:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.877 ************************************ 00:22:25.877 START TEST nvmf_fio_host 00:22:25.877 ************************************ 00:22:25.878 19:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.878 * Looking for test storage... 
00:22:25.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.878 19:58:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.157 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.158 
19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.158 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.158 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:31.158 19:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:31.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:31.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms
00:22:31.158
00:22:31.158 --- 10.0.0.2 ping statistics ---
00:22:31.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:31.158 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:31.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:31.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:22:31.158
00:22:31.158 --- 10.0.0.1 ping statistics ---
00:22:31.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:31.158 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2138316
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- #
waitforlisten 2138316 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2138316 ']' 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.158 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.158 [2024-07-24 19:58:22.107648] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:22:31.158 [2024-07-24 19:58:22.107690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.158 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.158 [2024-07-24 19:58:22.165883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.158 [2024-07-24 19:58:22.251645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.158 [2024-07-24 19:58:22.251678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.159 [2024-07-24 19:58:22.251685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.159 [2024-07-24 19:58:22.251692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.159 [2024-07-24 19:58:22.251697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.159 [2024-07-24 19:58:22.251738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.159 [2024-07-24 19:58:22.251858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.159 [2024-07-24 19:58:22.251868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.159 [2024-07-24 19:58:22.251870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.418 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.418 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:31.419 19:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:31.678 [2024-07-24 19:58:23.083931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.678 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:31.678 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.678 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.678 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:31.938 Malloc1 00:22:31.938 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.198 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:32.198 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.458 [2024-07-24 19:58:23.894037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.458 19:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:32.718 19:58:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:32.718 19:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:32.978 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:32.978 fio-3.35 00:22:32.978 Starting 1 thread 00:22:32.978 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.518 00:22:35.518 test: (groupid=0, jobs=1): err= 0: pid=2138849: Wed Jul 24 19:58:26 2024 00:22:35.518 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.4MiB/2003msec) 00:22:35.518 slat (nsec): min=1602, max=243227, avg=1775.14, stdev=2288.53 00:22:35.518 clat (usec): min=3327, max=58958, avg=6777.83, stdev=3401.33 00:22:35.518 lat (usec): min=3328, max=58959, avg=6779.60, stdev=3401.41 00:22:35.518 clat percentiles (usec): 00:22:35.518 | 1.00th=[ 4228], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5604], 00:22:35.518 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6390], 00:22:35.518 | 70.00th=[ 6652], 80.00th=[ 7177], 90.00th=[ 8356], 95.00th=[10159], 00:22:35.518 | 99.00th=[14877], 99.50th=[17433], 99.90th=[57410], 99.95th=[58459], 00:22:35.518 | 99.99th=[58459] 00:22:35.518 bw ( KiB/s): min=41248, max=46752, per=99.78%, avg=44054.00, stdev=2349.22, samples=4 00:22:35.518 iops : min=10312, max=11688, avg=11013.50, stdev=587.30, samples=4 00:22:35.518 write: IOPS=11.0k, BW=43.0MiB/s 
(45.1MB/s)(86.1MiB/2003msec); 0 zone resets 00:22:35.518 slat (nsec): min=1629, max=244116, avg=1837.53, stdev=1830.33 00:22:35.518 clat (usec): min=1979, max=50563, avg=4787.47, stdev=2011.16 00:22:35.518 lat (usec): min=1981, max=50565, avg=4789.31, stdev=2011.34 00:22:35.518 clat percentiles (usec): 00:22:35.518 | 1.00th=[ 2802], 5.00th=[ 3326], 10.00th=[ 3687], 20.00th=[ 4080], 00:22:35.518 | 30.00th=[ 4359], 40.00th=[ 4555], 50.00th=[ 4686], 60.00th=[ 4817], 00:22:35.518 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5538], 95.00th=[ 6194], 00:22:35.518 | 99.00th=[ 8979], 99.50th=[10552], 99.90th=[47449], 99.95th=[50070], 00:22:35.518 | 99.99th=[50594] 00:22:35.518 bw ( KiB/s): min=41920, max=45704, per=99.94%, avg=43990.00, stdev=1588.38, samples=4 00:22:35.518 iops : min=10480, max=11426, avg=10997.50, stdev=397.09, samples=4 00:22:35.518 lat (msec) : 2=0.02%, 4=9.31%, 10=87.73%, 20=2.65%, 50=0.16% 00:22:35.518 lat (msec) : 100=0.13% 00:22:35.519 cpu : usr=68.93%, sys=25.12%, ctx=32, majf=0, minf=5 00:22:35.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:35.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.519 issued rwts: total=22108,22041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.519 00:22:35.519 Run status group 0 (all jobs): 00:22:35.519 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.4MiB (90.6MB), run=2003-2003msec 00:22:35.519 WRITE: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=86.1MiB (90.3MB), run=2003-2003msec 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- 
# awk '{print $3}' 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:35.519 19:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:35.519 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:35.519 fio-3.35 00:22:35.519 Starting 1 thread 00:22:35.519 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.057 00:22:38.057 test: (groupid=0, jobs=1): err= 0: pid=2139423: Wed Jul 24 19:58:29 2024 00:22:38.057 read: IOPS=9122, BW=143MiB/s (149MB/s)(287MiB/2011msec) 00:22:38.057 slat (usec): min=2, max=100, avg= 2.87, stdev= 1.39 00:22:38.057 clat (usec): min=3117, max=45213, avg=8640.65, stdev=4013.25 00:22:38.057 lat (usec): min=3120, max=45216, avg=8643.52, stdev=4013.65 00:22:38.057 clat percentiles (usec): 00:22:38.057 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6128], 00:22:38.057 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:22:38.057 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11731], 95.00th=[14222], 00:22:38.057 | 99.00th=[28967], 99.50th=[30278], 99.90th=[34866], 99.95th=[35390], 00:22:38.057 | 99.99th=[40109] 00:22:38.057 bw ( KiB/s): min=66240, max=84512, per=49.10%, avg=71664.00, stdev=8633.22, samples=4 00:22:38.057 iops : min= 4140, max= 5282, avg=4479.00, stdev=539.58, samples=4 00:22:38.057 write: IOPS=5547, BW=86.7MiB/s (90.9MB/s)(147MiB/1694msec); 0 zone resets 00:22:38.057 slat (usec): min=30, max=254, avg=31.92, stdev= 5.99 00:22:38.057 clat (usec): min=4605, max=36505, avg=9495.79, stdev=4050.60 00:22:38.057 lat (usec): min=4636, max=36538, avg=9527.71, stdev=4053.04 00:22:38.057 clat percentiles (usec): 00:22:38.057 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7570], 00:22:38.057 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:22:38.057 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11076], 95.00th=[12649], 00:22:38.057 | 99.00th=[30802], 99.50th=[33817], 99.90th=[35390], 99.95th=[35914], 00:22:38.057 | 99.99th=[36439] 00:22:38.057 bw ( KiB/s): min=68512, max=88064, per=84.09%, avg=74632.00, stdev=9096.12, samples=4 00:22:38.057 iops : min= 4282, max= 5504, avg=4664.50, stdev=568.51, samples=4 00:22:38.057 lat (msec) : 4=0.55%, 10=78.81%, 20=17.70%, 50=2.94% 
00:22:38.057 cpu : usr=83.43%, sys=12.19%, ctx=22, majf=0, minf=2 00:22:38.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:38.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.057 issued rwts: total=18346,9397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.057 00:22:38.057 Run status group 0 (all jobs): 00:22:38.057 READ: bw=143MiB/s (149MB/s), 143MiB/s-143MiB/s (149MB/s-149MB/s), io=287MiB (301MB), run=2011-2011msec 00:22:38.057 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=147MiB (154MB), run=1694-1694msec 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:38.057 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.315 rmmod nvme_tcp 00:22:38.315 rmmod nvme_fabrics 00:22:38.315 rmmod nvme_keyring 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2138316 ']' 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2138316 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2138316 ']' 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2138316 00:22:38.315 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2138316 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2138316' 00:22:38.316 killing process with pid 2138316 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@969 -- # kill 2138316 00:22:38.316 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2138316 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.575 19:58:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.483 00:22:40.483 real 0m15.070s 00:22:40.483 user 0m46.990s 00:22:40.483 sys 0m5.695s 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.483 ************************************ 00:22:40.483 END TEST nvmf_fio_host 00:22:40.483 ************************************ 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:40.483 19:58:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.743 ************************************ 00:22:40.743 START TEST nvmf_failover 00:22:40.743 ************************************ 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:40.743 * Looking for test storage... 
00:22:40.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.743 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
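The nvmftestinit call above kicks off the NIC discovery visible in the trace that follows: the harness matches PCI vendor/device IDs against tables for e810, x722 and Mellanox parts (SPDK_TEST_NVMF_NICS=e810 in this run, so Intel 0x8086 devices 0x1592/0x159b are the ones that count). A short illustrative scan over sysfs — this loop is a sketch of the idea, not the harness's actual function:

  # Vendor 0x8086 with device 0x1592 or 0x159b marks an E810 port.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
          ls "$dev/net" 2> /dev/null    # net devices behind the port, e.g. cvl_0_0
      fi
  done

Compare the "Found 0000:86:00.0 (0x8086 - 0x159b)" and "Found net devices under ..." lines below, which are the harness's output for exactly this match.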
00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.744 19:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.153 19:58:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.153 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:46.154 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.154 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:46.154 00:22:46.154 --- 10.0.0.2 ping statistics --- 00:22:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.154 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:22:46.154 00:22:46.154 --- 10.0.0.1 ping statistics --- 00:22:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.154 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.154 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2143171 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2143171 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2143171 ']' 00:22:46.155 19:58:37 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:46.155 19:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.155 [2024-07-24 19:58:37.395141] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:22:46.155 [2024-07-24 19:58:37.395187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.155 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.155 [2024-07-24 19:58:37.452296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:46.155 [2024-07-24 19:58:37.531205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.155 [2024-07-24 19:58:37.531239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.155 [2024-07-24 19:58:37.531246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.155 [2024-07-24 19:58:37.531252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.155 [2024-07-24 19:58:37.531257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
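At this point the failover target is running inside the cvl_0_0_ns_spdk network namespace, so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) share one dual-port NIC but talk over a real TCP path. The plumbing, condensed from the nvmf_tcp_init trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the host-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  # nvmf_tgt itself is then launched via: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask matches the "Reactor started on core 1/2/3" notices below.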
00:22:46.155 [2024-07-24 19:58:37.531352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.155 [2024-07-24 19:58:37.531368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.155 [2024-07-24 19:58:37.531370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.722 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.981 [2024-07-24 19:58:38.395109] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.981 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:47.239 Malloc0 00:22:47.239 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:47.239 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:47.498 19:58:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.756 [2024-07-24 19:58:39.141339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.756 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:47.756 [2024-07-24 19:58:39.337926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:48.015 [2024-07-24 19:58:39.526575] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2143646 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2143646 /var/tmp/bdevperf.sock 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2143646 ']' 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.015 19:58:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:48.955 19:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.955 19:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:48.955 19:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.215 NVMe0n1 00:22:49.215 19:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.474 00:22:49.474 19:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2143881 00:22:49.474 19:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.474 19:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:50.856 19:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.856 [2024-07-24 19:58:42.193342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.856 [2024-07-24 19:58:42.193523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193559] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 [2024-07-24 19:58:42.193571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x768f50 is same with the state(5) to be set 00:22:50.857 19:58:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:54.148 19:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.148 00:22:54.149 19:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.149 [2024-07-24 19:58:45.650846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 [2024-07-24 19:58:45.650969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x769d70 is same with the state(5) to be set 00:22:54.149 19:58:45 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:57.447 19:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.447 [2024-07-24 19:58:48.859618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.447 19:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:58.387 19:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:58.647 [2024-07-24 19:58:50.061832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 [2024-07-24 19:58:50.061952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x923b80 is same with the state(5) to be set 00:22:58.647 19:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2143881 00:23:05.227 0 00:23:05.227 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2143646 00:23:05.227 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2143646 ']' 00:23:05.227 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2143646 00:23:05.227 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@955 -- # uname 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143646 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143646' 00:23:05.228 killing process with pid 2143646 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2143646 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2143646 00:23:05.228 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:05.228 [2024-07-24 19:58:39.601621] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:23:05.228 [2024-07-24 19:58:39.601673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143646 ] 00:23:05.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.228 [2024-07-24 19:58:39.655693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.228 [2024-07-24 19:58:39.730656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.228 Running I/O for 15 seconds... 
00:23:05.228 [2024-07-24 19:58:42.194156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.228 [2024-07-24 19:58:42.194191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical command/completion pairs for the remaining in-flight I/O on qid:1 (READ lba 102312-102736, WRITE lba 102744-103312), each aborted with SQ DELETION (00/08), elided] 00:23:05.231 [2024-07-24 19:58:42.196098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.231 [2024-07-24 19:58:42.196104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.231 [2024-07-24 19:58:42.196110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103320 len:8 PRP1 0x0 PRP2 0x0 00:23:05.231 [2024-07-24 19:58:42.196118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.231 [2024-07-24 19:58:42.196159] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xebc4b0 was disconnected and freed. reset controller.
00:23:05.231 [2024-07-24 19:58:42.196168] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:05.231 [2024-07-24 19:58:42.196188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.231 [2024-07-24 19:58:42.196196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical pairs for ASYNC EVENT REQUESTs cid:1-3 elided] 00:23:05.231 [2024-07-24 19:58:42.196243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.231 [2024-07-24 19:58:42.199084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.231 [2024-07-24 19:58:42.199113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9540 (9): Bad file descriptor 00:23:05.231 [2024-07-24 19:58:42.270846] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:05.231 [2024-07-24 19:58:45.651450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.231 [2024-07-24 19:58:45.651484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical command/completion pairs for the remaining in-flight I/O on qid:1 (WRITE lba 28656-28704, READ lba 28072-28320), each aborted with SQ DELETION (00/08), elided] 00:23:05.233 [2024-07-24 19:58:45.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28712 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 
[2024-07-24 19:58:45.652234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.233 [2024-07-24 19:58:45.652542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.233 [2024-07-24 19:58:45.652622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.233 [2024-07-24 19:58:45.652629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.652644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.652658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 
[2024-07-24 19:58:45.652823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652972] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.652986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.652995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.234 [2024-07-24 19:58:45.653131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.653146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.653161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.653175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.234 [2024-07-24 19:58:45.653185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.234 [2024-07-24 19:58:45.653191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.235 [2024-07-24 19:58:45.653350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.235 [2024-07-24 19:58:45.653373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.235 [2024-07-24 19:58:45.653384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29088 len:8 PRP1 0x0 PRP2 0x0 00:23:05.235 [2024-07-24 19:58:45.653392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.235 [2024-07-24 19:58:45.653431] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeed3f0 was disconnected and freed. reset controller. 
00:23:05.235 [2024-07-24 19:58:45.653440] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:23:05.235 [2024-07-24 19:58:45.653460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:05.235 [2024-07-24 19:58:45.653468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.235 [2024-07-24 19:58:45.653476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:05.235 [2024-07-24 19:58:45.653482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.235 [2024-07-24 19:58:45.653489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:05.235 [2024-07-24 19:58:45.653495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.235 [2024-07-24 19:58:45.653503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:05.235 [2024-07-24 19:58:45.653509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.235 [2024-07-24 19:58:45.653515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:05.235 [2024-07-24 19:58:45.656346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:23:05.235 [2024-07-24 19:58:45.656376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9540 (9): Bad file descriptor 
00:23:05.235 [2024-07-24 19:58:45.816827] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:05.235 [2024-07-24 19:58:50.062261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:05.235 [2024-07-24 19:58:50.062297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs omitted: remaining queued READ/WRITE commands on qid:1 (lba 70584-71544, len:8) printed and completed as ABORTED - SQ DELETION (00/08) ...]
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.237 [2024-07-24 19:58:50.063616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.237 [2024-07-24 19:58:50.063624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.238 [2024-07-24 19:58:50.063653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.063986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.063993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.238 [2024-07-24 19:58:50.064101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.238 [2024-07-24 19:58:50.064190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.238 [2024-07-24 19:58:50.064207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.238 [2024-07-24 19:58:50.064214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.239 [2024-07-24 19:58:50.064220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71600 len:8 PRP1 0x0 PRP2 0x0 00:23:05.239 [2024-07-24 19:58:50.064229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.239 [2024-07-24 19:58:50.064270] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeed0b0 was disconnected and freed. reset controller. 
00:23:05.239 [2024-07-24 19:58:50.064279] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:05.239 [2024-07-24 19:58:50.064298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.239 [2024-07-24 19:58:50.064305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.239 [2024-07-24 19:58:50.064313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.239 [2024-07-24 19:58:50.064320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.239 [2024-07-24 19:58:50.064332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.239 [2024-07-24 19:58:50.064338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.239 [2024-07-24 19:58:50.064345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.239 [2024-07-24 19:58:50.064352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.239 [2024-07-24 19:58:50.064358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:05.239 [2024-07-24 19:58:50.064391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9540 (9): Bad file descriptor
00:23:05.239 [2024-07-24 19:58:50.067235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:05.239 [2024-07-24 19:58:50.189321] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
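The abort storm and the failover records above are the expected mechanics of a forced path switch: when the active path is removed, the TCP qpair is torn down, every queued command completes as ABORTED - SQ DELETION (00/08), and bdev_nvme fails over to the next registered trid and resets the controller. As a minimal sketch of how this multipath arrangement is assembled (the exact rpc.py calls are traced later in this log; the address 10.0.0.2 and ports 4420-4422 are this run's values):

  # target side: one subsystem, several listeners acting as alternate paths
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host side: attach the same bdev name once per path; the later attaches
  # register failover trids instead of creating new controllers
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # detaching the active path forces the failover/reset seen above
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1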
00:23:05.239
00:23:05.239 Latency(us)
00:23:05.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.239 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:05.239 Verification LBA range: start 0x0 length 0x4000
00:23:05.239 NVMe0n1 : 15.01 10916.61 42.64 1102.88 0.00 10626.60 1495.93 28493.91
00:23:05.239 ===================================================================================================================
00:23:05.239 Total : 10916.61 42.64 1102.88 0.00 10626.60 1495.93 28493.91
00:23:05.239 Received shutdown signal, test time was about 15.000000 seconds
00:23:05.239
00:23:05.239 Latency(us)
00:23:05.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.239 ===================================================================================================================
00:23:05.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2146381
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2146381 /var/tmp/bdevperf.sock
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2146381 ']'
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
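The count check traced above (host/failover.sh@65-67) is the test's pass criterion: the 15-second bdevperf run must have logged exactly one successful reset per forced path switch, three in total. A hedged sketch of the idiom, assuming the bdevperf output was captured to try.txt as it is in this run:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count"
      exit 1
  fi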
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:05.239 19:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:05.809 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:05.809 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:23:05.809 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:05.809 [2024-07-24 19:58:57.402058] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:06.069 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:06.069 [2024-07-24 19:58:57.586567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:06.069 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:06.330 NVMe0n1
00:23:06.330 19:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:06.662
00:23:06.662 19:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:07.232
00:23:07.232 19:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
19:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:07.492 19:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:10.788 19:59:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:59:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:10.788 19:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2147329
19:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
19:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2147329
00:23:11.727 0
00:23:11.727 19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
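bdevperf is driven here in its RPC-controlled mode: started with -z it sits idle on /var/tmp/bdevperf.sock, bdevs are attached over that socket, and the workload is only kicked off by bdevperf.py perform_tests. A condensed sketch of the sequence traced above; how the pid is captured is not visible in this trace, so the backgrounding and $! are the usual idiom rather than the verbatim script:

  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper traced from autotest_common.sh
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # triggers the 1-second run below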
00:23:11.727 [2024-07-24 19:58:56.450410] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:23:11.727 [2024-07-24 19:58:56.450464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146381 ]
00:23:11.727 EAL: No free 2048 kB hugepages reported on node 1
00:23:11.727 [2024-07-24 19:58:56.506482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:11.727 [2024-07-24 19:58:56.576104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:11.727 [2024-07-24 19:58:58.931643] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:11.727 [2024-07-24 19:58:58.931687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.727 [2024-07-24 19:58:58.931699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.727 [2024-07-24 19:58:58.931708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.727 [2024-07-24 19:58:58.931715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.727 [2024-07-24 19:58:58.931722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.727 [2024-07-24 19:58:58.931729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.727 [2024-07-24 19:58:58.931736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:11.727 [2024-07-24 19:58:58.931743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.727 [2024-07-24 19:58:58.931750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:11.727 [2024-07-24 19:58:58.931774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:11.727 [2024-07-24 19:58:58.931788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a540 (9): Bad file descriptor
00:23:11.727 [2024-07-24 19:58:58.942868] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:11.727 Running I/O for 1 seconds...
00:23:11.727
00:23:11.727 Latency(us)
00:23:11.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:11.727 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:11.727 Verification LBA range: start 0x0 length 0x4000
00:23:11.727 NVMe0n1 : 1.01 10140.88 39.61 0.00 0.00 12570.50 1937.59 27468.13
00:23:11.728 ===================================================================================================================
00:23:11.728 Total : 10140.88 39.61 0.00 0.00 12570.50 1937.59 27468.13
00:23:11.728 19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:11.987 19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:12.246 19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:12.506 19:59:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:12.506 19:59:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:15.800 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:15.800 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2146381
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2146381 ']'
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2146381
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146381
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146381'
killing process with pid 2146381
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2146381
19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2146381
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:16.060 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:16.060 rmmod nvme_tcp
00:23:16.320 rmmod nvme_fabrics
00:23:16.320 rmmod nvme_keyring
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2143171 ']'
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2143171
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2143171 ']'
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2143171
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143171
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143171'
killing process with pid 2143171
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2143171
00:23:16.320 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2143171
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
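killprocess, traced twice above (once for the bdevperf pid 2146381 and once for the nvmf target pid 2143171), is the harness's guarded kill: verify a pid was given and is alive, check the process name so a sudo wrapper is not signalled directly, then kill and reap. A minimal re-statement of the traced steps, not the verbatim helper from autotest_common.sh:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1             # no pid given
      kill -0 "$pid" || return 0            # already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" = sudo ]; then
          :                                 # signal the sudo child instead (branch not exercised in this log)
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }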
00:23:16.580 19:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:18.487
00:23:18.487 real 0m37.922s
00:23:18.487 user 2m3.215s
00:23:18.487 sys 0m7.285s
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:18.487 ************************************
00:23:18.487 END TEST nvmf_failover
00:23:18.487 ************************************
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:18.487 ************************************
00:23:18.487 START TEST nvmf_host_discovery
00:23:18.487 ************************************
00:23:18.487 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:18.746 * Looking for test storage...
00:23:18.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:18.746 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6: PATH is prepended with /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin, exported, and echoed; the multi-kilobyte PATH value (the same three prefixes repeated by earlier sourcing) is omitted here ...]
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.747 19:59:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.028 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.028 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.028 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.029 19:59:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:24.029 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:24.029 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:24.029 Found net devices under 0000:86:00.0: cvl_0_0 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:24.029 Found net devices under 0000:86:00.1: cvl_0_1 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.029 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.030 19:59:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:24.030 00:23:24.030 --- 10.0.0.2 ping statistics --- 00:23:24.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.030 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:23:24.030 00:23:24.030 --- 10.0.0.1 ping statistics --- 00:23:24.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.030 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2151553 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2151553 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2151553 ']' 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
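The nvmfappstart sequence just traced launches the target inside the namespace and then waits for its RPC socket. A minimal sketch of that pattern follows; the socket test is a simplified stand-in for the suite's waitforlisten (which polls the RPC server itself), while the binary path and flags are the ones from the trace:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do                 # max_retries=100, as in the trace
      [[ -S /var/tmp/spdk.sock ]] && break  # RPC socket appears once the app is up
      sleep 0.1
  done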
00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.030 19:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.030 [2024-07-24 19:59:15.557152] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:23:24.030 [2024-07-24 19:59:15.557193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.030 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.030 [2024-07-24 19:59:15.609721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.290 [2024-07-24 19:59:15.687778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.290 [2024-07-24 19:59:15.687815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.290 [2024-07-24 19:59:15.687823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.290 [2024-07-24 19:59:15.687829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.290 [2024-07-24 19:59:15.687835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.290 [2024-07-24 19:59:15.687851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.860 [2024-07-24 19:59:16.418057] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:24.860 [2024-07-24 19:59:16.430191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.860 null0 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.860 null1 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.860 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2151798 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2151798 /tmp/host.sock 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2151798 ']' 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:25.120 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.120 19:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.120 [2024-07-24 19:59:16.492114] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
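Everything rpc_cmd drives here can be reproduced with SPDK's scripts/rpc.py against the same sockets. The target-side bring-up just logged is equivalent to the following sketch (run from the spdk checkout; rpc.py defaults to the target's /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                   # discovery service on port 8009
  scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MB, 512-byte blocks
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

The second nvmf_tgt started right after (-m 0x1 -r /tmp/host.sock) is the host-side app that the rest of the test drives through /tmp/host.sock.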
00:23:25.120 [2024-07-24 19:59:16.492155] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151798 ] 00:23:25.120 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.120 [2024-07-24 19:59:16.545922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.120 [2024-07-24 19:59:16.618481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:26.060 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 
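Host side, mirroring discovery.sh@50-51 just traced: the second app is told to log bdev_nvme activity and to run the discovery client against the target's discovery listener. The -b argument is the name prefix for the controllers it attaches (nvme0, ...), and -q is the host NQN it presents:

  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test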
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:26.061 19:59:17 
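The two assertion helpers traced repeatedly above have this reconstructed shape (the real definitions live in host/discovery.sh; rpc_cmd is the suite's wrapper around rpc.py). Note the socket split: -s /tmp/host.sock reads the host app's view, while the bare rpc_cmd calls (nvmf_create_subsystem, nvmf_subsystem_add_ns) configure the target:

  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

Both collapse their output to one space-separated line (sort | xargs), which is why the test compares against strings such as '' or 'nvme0n1 nvme0n2'.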
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 [2024-07-24 19:59:17.613311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.061 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:26.322 19:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:26.892 [2024-07-24 19:59:18.357253] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.892 [2024-07-24 19:59:18.357272] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.892 [2024-07-24 19:59:18.357286] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.892 
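waitforcondition, whose internals (autotest_common.sh@914-920) keep surfacing in the trace, is an eval-based poll. A reconstructed sketch; the success path matches the traced lines exactly, the failure return is inferred:

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # autotest_common.sh@917-918
          sleep 1                    # autotest_common.sh@920
      done
      return 1                       # inferred: give up after ~10 tries
  }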
[2024-07-24 19:59:18.486779] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:27.153 [2024-07-24 19:59:18.630508] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.153 [2024-07-24 19:59:18.630527] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
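The notification bookkeeping (discovery.sh@74-75) asks the host app for events newer than the last consumed notify_id and counts them with jq. Reconstructed sketch; the notify_id arithmetic (0 -> 1 -> 2 across this section) is inferred from the traced values:

  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

The nvme0n1 attach just logged produces exactly one such event, which is what the discovery.sh@108 check that follows expects.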
00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:27.413 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.414 19:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:27.414 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.713 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.714 [2024-07-24 19:59:19.105365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:27.714 [2024-07-24 19:59:19.105976] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:27.714 [2024-07-24 19:59:19.105997] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:27.714 [2024-07-24 19:59:19.233728] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:27.714 19:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:27.714 [2024-07-24 19:59:19.292496] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.714 [2024-07-24 19:59:19.292512] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.714 [2024-07-24 19:59:19.292517] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:29.098 19:59:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.098 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 [2024-07-24 19:59:20.373561] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:29.099 [2024-07-24 19:59:20.373587] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:23:29.099 [2024-07-24 19:59:20.382391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.099 [2024-07-24 19:59:20.382413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.099 [2024-07-24 19:59:20.382422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.099 [2024-07-24 19:59:20.382429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.099 [2024-07-24 19:59:20.382437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.099 [2024-07-24 19:59:20.382444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.099 [2024-07-24 19:59:20.382451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.099 [2024-07-24 19:59:20.382462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.099 [2024-07-24 19:59:20.382469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.099 [2024-07-24 19:59:20.392404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.099 [2024-07-24 19:59:20.402442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.099 [2024-07-24 19:59:20.402901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-07-24 19:59:20.402916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.099 [2024-07-24 19:59:20.402924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 [2024-07-24 19:59:20.402935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 [2024-07-24 19:59:20.402951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.099 [2024-07-24 19:59:20.402958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.099 [2024-07-24 19:59:20.402966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.099 [2024-07-24 19:59:20.402976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.099 [2024-07-24 19:59:20.412497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.099 [2024-07-24 19:59:20.412872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-07-24 19:59:20.412884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.099 [2024-07-24 19:59:20.412892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 [2024-07-24 19:59:20.412902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 [2024-07-24 19:59:20.412912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.099 [2024-07-24 19:59:20.412919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.099 [2024-07-24 19:59:20.412926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.099 [2024-07-24 19:59:20.412935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.099 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.099 [2024-07-24 19:59:20.422549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.099 [2024-07-24 19:59:20.423559] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-07-24 19:59:20.423585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.099 [2024-07-24 19:59:20.423595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 [2024-07-24 19:59:20.423609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 [2024-07-24 19:59:20.423630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.099 [2024-07-24 19:59:20.423637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.099 [2024-07-24 19:59:20.423644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.099 [2024-07-24 19:59:20.423656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.099 [2024-07-24 19:59:20.432608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.099 [2024-07-24 19:59:20.433121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-07-24 19:59:20.433135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.099 [2024-07-24 19:59:20.433142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 [2024-07-24 19:59:20.433154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 [2024-07-24 19:59:20.433169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.099 [2024-07-24 19:59:20.433176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.099 [2024-07-24 19:59:20.433183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.099 [2024-07-24 19:59:20.433193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
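The errno 111 (ECONNREFUSED) loop above is expected at this point in the test: discovery.sh has just torn down the 4420 listener, so every host-side reconnect to 10.0.0.2:4420 is refused until the discovery log page steers the controller to 4421. A minimal target-side sketch of the listener migration driving it (rpc_cmd is the suite's wrapper around scripts/rpc.py; the add-listener step happened earlier in discovery.sh and its exact form here is an assumption):

    # Verbatim from host/discovery.sh@127 in the xtrace above:
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Assumed form of the earlier step that registered the port the host
    # eventually fails over to; until the discovery AER is processed, the
    # host keeps hitting errno 111 on 4420 as logged above:
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421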
00:23:29.099 [2024-07-24 19:59:20.442662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.099 [2024-07-24 19:59:20.443102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.099 [2024-07-24 19:59:20.443115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.099 [2024-07-24 19:59:20.443122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.099 [2024-07-24 19:59:20.443135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.099 [2024-07-24 19:59:20.443145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.099 [2024-07-24 19:59:20.443150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.099 [2024-07-24 19:59:20.443157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.099 [2024-07-24 19:59:20.443166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.099 [2024-07-24 19:59:20.452712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.100 [2024-07-24 19:59:20.453154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.100 [2024-07-24 19:59:20.453166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929f30 with addr=10.0.0.2, port=4420 00:23:29.100 [2024-07-24 19:59:20.453173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1929f30 is same with the state(5) to be set 00:23:29.100 [2024-07-24 19:59:20.453183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1929f30 (9): Bad file descriptor 00:23:29.100 [2024-07-24 19:59:20.453192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.100 [2024-07-24 19:59:20.453197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.100 [2024-07-24 19:59:20.453204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.100 [2024-07-24 19:59:20.453213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
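In between the driver noise, the suite is spinning in waitforcondition (autotest_common.sh@914-918 in the xtrace above and below): a bounded eval loop over a caller-supplied condition string. A hedged reconstruction from the logged steps — local cond, local max=10, ((max--)), eval, return 0 all appear in the trace; the sleep between retries is assumed, since the log does not show it:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            # eval so compound conditions such as
            # 'get_notification_count && ((notification_count == expected_count))'
            # (used repeatedly in this test) work as a single string
            eval "$cond" && return 0
            sleep 1     # assumed back-off; not visible in the xtrace
        done
        return 1        # condition never held within max attempts
    }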
00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 [2024-07-24 19:59:20.461266] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:29.100 [2024-07-24 19:59:20.461281] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.100 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.361 19:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.301 [2024-07-24 19:59:21.755632] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:30.301 [2024-07-24 19:59:21.755653] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:30.301 [2024-07-24 19:59:21.755667] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.301 [2024-07-24 19:59:21.842931] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:30.561 [2024-07-24 19:59:22.034233] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:30.561 [2024-07-24 19:59:22.034259] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.561 request: 00:23:30.561 { 00:23:30.561 "name": "nvme", 00:23:30.561 "trtype": "tcp", 00:23:30.561 "traddr": "10.0.0.2", 00:23:30.561 "adrfam": "ipv4", 00:23:30.561 "trsvcid": "8009", 00:23:30.561 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:30.561 "wait_for_attach": true, 00:23:30.561 "method": "bdev_nvme_start_discovery", 00:23:30.561 "req_id": 1 00:23:30.561 } 00:23:30.561 Got JSON-RPC error response 00:23:30.561 response: 00:23:30.561 { 00:23:30.561 "code": -17, 00:23:30.561 "message": "File exists" 00:23:30.561 } 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:30.561 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.821 request: 00:23:30.821 { 00:23:30.821 "name": "nvme_second", 00:23:30.821 "trtype": "tcp", 00:23:30.821 "traddr": "10.0.0.2", 00:23:30.821 "adrfam": "ipv4", 00:23:30.821 "trsvcid": "8009", 00:23:30.821 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:30.821 "wait_for_attach": true, 00:23:30.821 "method": "bdev_nvme_start_discovery", 00:23:30.821 "req_id": 1 00:23:30.821 } 00:23:30.821 Got JSON-RPC error response 00:23:30.821 response: 00:23:30.821 { 00:23:30.821 "code": -17, 00:23:30.821 "message": "File exists" 00:23:30.821 } 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.821 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:30.822 19:59:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.822 19:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.762 [2024-07-24 19:59:23.283190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.762 [2024-07-24 19:59:23.283216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1925bb0 with addr=10.0.0.2, port=8010 00:23:31.762 [2024-07-24 19:59:23.283227] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:31.762 [2024-07-24 19:59:23.283234] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:31.762 [2024-07-24 19:59:23.283240] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:32.701 [2024-07-24 19:59:24.285736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.701 [2024-07-24 19:59:24.285759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966ee0 with addr=10.0.0.2, port=8010 00:23:32.701 [2024-07-24 19:59:24.285770] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:32.701 [2024-07-24 19:59:24.285776] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:32.701 [2024-07-24 19:59:24.285781] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:34.082 [2024-07-24 19:59:25.287641] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:34.082 request: 00:23:34.082 { 00:23:34.082 "name": "nvme_second", 00:23:34.082 "trtype": "tcp", 00:23:34.082 "traddr": "10.0.0.2", 00:23:34.082 "adrfam": "ipv4", 00:23:34.082 "trsvcid": "8010", 00:23:34.082 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:34.082 "wait_for_attach": false, 00:23:34.082 "attach_timeout_ms": 3000, 00:23:34.082 "method": "bdev_nvme_start_discovery", 00:23:34.082 "req_id": 1 00:23:34.082 } 00:23:34.082 Got JSON-RPC error response 00:23:34.082 response: 00:23:34.082 { 00:23:34.082 "code": -110, 00:23:34.082 "message": "Connection timed out" 00:23:34.082 } 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2151798 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.082 rmmod nvme_tcp 00:23:34.082 rmmod nvme_fabrics 00:23:34.082 rmmod nvme_keyring 00:23:34.082 19:59:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2151553 ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2151553 ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151553' 00:23:34.082 killing process with pid 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2151553 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.082 19:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.622 00:23:36.622 real 0m17.623s 00:23:36.622 user 0m22.202s 00:23:36.622 sys 0m5.335s 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.622 ************************************ 00:23:36.622 END TEST nvmf_host_discovery 00:23:36.622 ************************************ 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
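run_test, whose banners and real/user/sys timing bracket the suite above, is roughly the following shape. This is a reconstruction from the log, not the autotest_common.sh source: the banner text and the timed invocation are taken from the output above, the banner width and the handling of the '[' 3 -le 1 ']' argument-count guard are assumptions:

    run_test() {
        local test_name=$1; shift
        # the xtrace above shows an arg-count check ('[' 3 -le 1 ']')
        # before the suite is launched; its failure path is assumed here
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"           # produces the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }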
00:23:36.622 19:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.622 ************************************ 00:23:36.622 START TEST nvmf_host_multipath_status 00:23:36.622 ************************************ 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:36.622 * Looking for test storage... 00:23:36.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
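nvmf/common.sh derives the host identity logged above from nvme-cli. Condensed sketch — the gen-hostnqn call, the resulting values, and the NVME_HOST array are verbatim from the trace; the parameter expansion used to strip the NQN down to the bare UUID is an assumed equivalent:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 80aaeb9f-0274-ea11-906e-0017a4403562 above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")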
00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
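The PATH printed above repeats the /opt/golangci, /opt/protoc and /opt/go prefixes because paths/export.sh prepends them on every source, and common.sh is sourced once per suite. An illustrative one-liner — not part of export.sh — that shows the effective lookup order with duplicates collapsed, first occurrence winning:

    tr ':' '\n' <<< "$PATH" | awk '!seen[$0]++' | paste -sd: -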
00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.622 19:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.906 19:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:41.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.906 
19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.906 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:41.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:41.907 Found net devices under 0000:86:00.0: cvl_0_0 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.907 19:59:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:41.907 Found net devices under 0000:86:00.1: cvl_0_1 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:23:41.907 00:23:41.907 --- 10.0.0.2 ping statistics --- 00:23:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.907 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:41.907 00:23:41.907 --- 10.0.0.1 ping statistics --- 00:23:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.907 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2156868 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2156868 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2156868 ']' 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:41.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.907 19:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.907 [2024-07-24 19:59:33.360550] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:23:41.907 [2024-07-24 19:59:33.360591] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.907 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.907 [2024-07-24 19:59:33.417171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:41.907 [2024-07-24 19:59:33.495582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.907 [2024-07-24 19:59:33.495621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.907 [2024-07-24 19:59:33.495631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.907 [2024-07-24 19:59:33.495637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.907 [2024-07-24 19:59:33.495642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.907 [2024-07-24 19:59:33.495699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.907 [2024-07-24 19:59:33.495702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2156868 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:42.846 [2024-07-24 19:59:34.347347] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.846 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:43.105 Malloc0 00:23:43.105 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:43.365 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.365 19:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.624 [2024-07-24 19:59:35.064797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.624 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:43.884 [2024-07-24 19:59:35.237221] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2157132 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2157132 /var/tmp/bdevperf.sock 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2157132 ']' 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
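
Condensed, the target-side bring-up traced above (multipath_status.sh@33-42) amounts to the RPC sequence below. This is a sketch reconstructed from the xtrace, with the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path abbreviated to rpc.py; every command and flag appears verbatim in the trace.

  # TCP transport with an 8192-byte in-capsule data threshold
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem: allow any host (-a), ANA reporting enabled (-r), max 2 namespaces (-m 2)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same target IP, one per port, so each path's ANA state
  # can be set independently later in the test
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then launched with -z (pause until the perform_tests RPC, which the test issues at multipath_status.sh@76 via bdevperf.py) and -q 128 -o 4096 -w verify -t 90; it attaches Nvme0 to the 4420 listener at @55 and adds the 4421 path at @56 with -x multipath -l -1 -o 10, producing the two I/O paths inspected below.
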
00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.884 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:44.143 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.144 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:44.144 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:44.144 19:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:44.714 Nvme0n1 00:23:44.714 19:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:44.973 Nvme0n1 00:23:44.973 19:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:44.973 19:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:47.567 19:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:47.567 19:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:47.567 19:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:47.567 19:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:48.576 19:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:48.576 19:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:48.576 19:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.576 19:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.576 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.576 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:48.576 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.576 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.835 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.094 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.094 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:49.094 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:49.094 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.353 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.353 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:49.353 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.353 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.613 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.613 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:49.613 19:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.872 19:59:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.872 19:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.251 19:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.510 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.510 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.510 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.510 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.770 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.770 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.770 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.770 19:59:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:52.029 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:52.288 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:52.548 19:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:53.486 19:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:53.486 19:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.486 19:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.486 19:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.746 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.006 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.006 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.006 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.006 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.266 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.266 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.266 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.266 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.525 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.525 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.525 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.525 19:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.525 19:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.525 19:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:54.526 19:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.786 19:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:55.045 19:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:55.983 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:55.983 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.983 19:59:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.983 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.242 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.242 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:56.242 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.242 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.501 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.501 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.501 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.501 19:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.501 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.501 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.501 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.501 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.761 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.761 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.761 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.761 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.020 19:59:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:57.020 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.280 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:57.540 19:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:58.480 19:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:58.480 19:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:58.480 19:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.480 19:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.740 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.741 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:58.741 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.741 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.002 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.262 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.262 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:59.262 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.262 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.522 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.522 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:59.522 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.522 19:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.522 19:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.522 19:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:59.522 19:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:59.782 19:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.041 19:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.102 19:59:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.102 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.361 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.362 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.362 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.362 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.621 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.621 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.621 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.621 19:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.621 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.621 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:01.621 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.621 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.881 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.881 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.881 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.881 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.140 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.140 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:02.400 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:02.400 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:02.400 19:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.660 19:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:03.598 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:03.598 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.598 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.598 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.859 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.859 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.859 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.859 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.119 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.378 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.378 19:59:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.378 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.378 19:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.638 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.638 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.638 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.638 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.898 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.898 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:04.898 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.898 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.158 19:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:06.096 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:06.096 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:06.096 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.096 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.356 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.356 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.356 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.357 19:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.617 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.877 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.877 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.877 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.877 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.137 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.137 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.137 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.137 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.397 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.397 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:07.397 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.397 19:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.657 19:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
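
Every check_status/port_status round in this trace reduces to the same probe: ask bdevperf, over its RPC socket, for its current view of the I/O paths and filter a single attribute for a single trsvcid with jq. A condensed sketch of what multipath_status.sh@64 executes each time, assuming the same rpc.py abbreviation as above:

  # one probe: is the 4420 path the "current" (actively used) path?
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
  # the test then compares the printed value against its expectation:
  #   [[ true == \t\r\u\e ]]
  # and repeats the probe for .connected and .accessible on both 4420 and 4421.

The expectations track ANA directly in this trace: both paths stay connected=true throughout, a path reports accessible=false only while its listener is inaccessible, and current follows whichever path has the better ANA state (ties staying on the first-attached 4420 path). After bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active at 19:59:53 above, both optimized paths report current=true at once.
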
00:24:08.597 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:08.597 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.597 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.597 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.857 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.857 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.857 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.857 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.117 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.377 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.377 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.377 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.377 20:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.637 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.637 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.637 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.637 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.896 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.896 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:09.896 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.896 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.154 20:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:11.086 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:11.086 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.086 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.086 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.344 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.344 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.344 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.344 20:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:11.604 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.863 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.123 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.123 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:12.123 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.123 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2157132 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2157132 ']' 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2157132 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2157132 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2157132' 00:24:12.383 killing process with pid 2157132 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2157132 00:24:12.383 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2157132 00:24:12.383 Connection closed with partial response: 00:24:12.383 00:24:12.383 00:24:12.646 
20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2157132 00:24:12.646 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:12.646 [2024-07-24 19:59:35.283952] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:24:12.646 [2024-07-24 19:59:35.284002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157132 ] 00:24:12.646 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.646 [2024-07-24 19:59:35.334455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.646 [2024-07-24 19:59:35.407721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.646 Running I/O for 90 seconds... 00:24:12.646 [2024-07-24 19:59:48.774233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
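The EAL banner at the top of the try.txt dump pins down how bdevperf was launched for this run. Reassembled, the invocation looks roughly like the following (a sketch: the socket path, core mask, and job parameters are read off the log, while the binary path and the -z wait-for-RPC flag are assumptions about the wrapper script):

    # hypothetical binary path; the socket matches the 'rpc.py -s /var/tmp/bdevperf.sock' calls above
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -m 0x4 -q 128 -o 4096 -w verify -t 90
    # -m 0x4          : core mask 0x4 -> "Reactor started on core 2"
    # -q 128 -o 4096  : queue depth 128, 4 KiB I/O, per the job summary line
    # -w verify -t 90 : verify workload, "Running I/O for 90 seconds"

The long run of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions that follows is expected here: this is NVMe status code type 0x3 (path-related) with status code 0x02 (ANA inaccessible). I/O that was in flight on a listener just switched to inaccessible completes with that status, and the host retries it on the remaining accessible path, which is why the final job summary still reports 0.00 Fail/s.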
00:24:12.646 [2024-07-24 19:59:48.776284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:12.646 [2024-07-24 19:59:48.776721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.646 [2024-07-24 19:59:48.776728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.776968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.776985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 
[2024-07-24 19:59:48.776992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60824 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:12.647 [2024-07-24 19:59:48.777567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.647 [2024-07-24 19:59:48.777574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 
19:59:48.777825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 19:59:48.777928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 19:59:48.777934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 20:00:01.622714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 20:00:01.622777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.622986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.622993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.623014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 20:00:01.623033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.648 [2024-07-24 20:00:01.623059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.623341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.623363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.623383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.648 [2024-07-24 20:00:01.623403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:12.648 [2024-07-24 20:00:01.623415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.649 [2024-07-24 20:00:01.623678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.649 [2024-07-24 20:00:01.623697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.623810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.623817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.649 [2024-07-24 20:00:01.624108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.649 [2024-07-24 20:00:01.624128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.649 [2024-07-24 20:00:01.624148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.649 [2024-07-24 20:00:01.624250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:12.649 [2024-07-24 20:00:01.624877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.624890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.624904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.624912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.624925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.624932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.624945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.624951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.624964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.624972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.624984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.624991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.650 [2024-07-24 20:00:01.625271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:12.650 [2024-07-24 20:00:01.625285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:12.650 [2024-07-24 20:00:01.625292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:12.650 Received shutdown signal, test time was about 27.173823 seconds 00:24:12.650 00:24:12.650 Latency(us) 00:24:12.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.650 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.650 Verification LBA range: start 0x0 length 0x4000 00:24:12.650 Nvme0n1 : 27.17 10520.28 41.09 0.00 0.00 12144.95 541.38 3034487.76 00:24:12.650 =================================================================================================================== 00:24:12.650 Total : 
10520.28 41.09 0.00 0.00 12144.95 541.38 3034487.76 00:24:12.650 20:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.650 rmmod nvme_tcp 00:24:12.650 rmmod nvme_fabrics 00:24:12.650 rmmod nvme_keyring 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2156868 ']' 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2156868 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2156868 ']' 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2156868 00:24:12.650 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2156868 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2156868' 00:24:12.920 killing process with pid 2156868 00:24:12.920 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2156868 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2156868 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.921 20:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.462 00:24:15.462 real 0m38.789s 00:24:15.462 user 1m44.847s 00:24:15.462 sys 0m10.588s 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:15.462 ************************************ 00:24:15.462 END TEST nvmf_host_multipath_status 00:24:15.462 ************************************ 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.462 ************************************ 00:24:15.462 START TEST nvmf_discovery_remove_ifc 00:24:15.462 ************************************ 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:15.462 * Looking for test storage... 
00:24:15.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.462 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.463 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.463 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.463 20:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.742 20:00:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:20.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:20.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.742 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:20.743 Found net devices under 0000:86:00.0: cvl_0_0 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.743 
20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:20.743 Found net devices under 0000:86:00.1: cvl_0_1 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.743 20:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:20.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:20.743 00:24:20.743 --- 10.0.0.2 ping statistics --- 00:24:20.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.743 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:24:20.743 00:24:20.743 --- 10.0.0.1 ping statistics --- 00:24:20.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.743 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2165938 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2165938 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2165938 ']' 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
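Both pings succeed, which confirms the namespace plumbing built a few lines earlier: nvmf_tcp_init in nvmf/common.sh moves the first E810 port (cvl_0_0) into a private network namespace for the target and leaves the second (cvl_0_1) in the default namespace for the initiator. A condensed sketch of that setup, with every command copied from the trace above (interface names and addresses are specific to this run):

    # Target/initiator split over the two physical ports (names/IPs from this run).
    ip -4 addr flush cvl_0_0                          # drop any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port becomes target-side
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability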
00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.743 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.743 [2024-07-24 20:00:12.186950] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:24:20.743 [2024-07-24 20:00:12.186998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.743 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.743 [2024-07-24 20:00:12.245551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.743 [2024-07-24 20:00:12.324867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.743 [2024-07-24 20:00:12.324904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.743 [2024-07-24 20:00:12.324911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.743 [2024-07-24 20:00:12.324917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.743 [2024-07-24 20:00:12.324922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.743 [2024-07-24 20:00:12.324938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.680 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.680 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:21.680 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.680 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.680 20:00:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.680 [2024-07-24 20:00:13.036284] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.680 [2024-07-24 20:00:13.044410] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:21.680 null0 00:24:21.680 [2024-07-24 20:00:13.076427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2166183 00:24:21.680 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
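From here on two separate SPDK processes are running. The target (pid 2165938) lives inside the namespace, answers RPCs on the default /var/tmp/spdk.sock, and is pinned to core 1; the host side is a second nvmf_tgt instance (pid 2166183) on core 0 with its own RPC socket, held at --wait-for-rpc so bdev_nvme options can be set before the framework initializes. In outline (paths abbreviated from the full workspace paths shown in the trace):

    # Target: wrapped in NVMF_TARGET_NS_CMD, core mask 0x2, default RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Host: default namespace, core mask 0x1, private RPC socket; paused at
    # --wait-for-rpc so bdev_nvme_set_options can run before framework_start_init.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!

The -L bdev_nvme flag on the host process is what produces the bdev_nvme *INFO*/*DEBUG* discovery and qpair records interleaved through the rest of this test.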
00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2166183 /tmp/host.sock 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2166183 ']' 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:21.681 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.681 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.681 [2024-07-24 20:00:13.144909] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:24:21.681 [2024-07-24 20:00:13.144948] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166183 ] 00:24:21.681 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.681 [2024-07-24 20:00:13.198488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.939 [2024-07-24 20:00:13.277872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.513 20:00:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.513 20:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.513 20:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:22.513 20:00:14 
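The bdev_nvme_start_discovery call above is the crux of the test: instead of connecting to nqn.2016-06.io.spdk:cnode0 directly, the host attaches through the discovery service on port 8009 and lets the discovery poller create (and later re-create) the NVM controller. The deliberately tight timeouts are what make the upcoming interface removal observable within a couple of seconds. The same call reformatted for readability, every flag taken from the trace (script path abbreviated):

    # Host-side discovery attach.
    #   --ctrlr-loss-timeout-sec 2    declare the controller lost after 2s offline
    #   --reconnect-delay-sec 1       retry the connection once per second
    #   --fast-io-fail-timeout-sec 1  fail pending I/O quickly while reconnecting
    #   --wait-for-attach             block until the NVM subsystem is attached
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach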
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.513 20:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.889 [2024-07-24 20:00:15.107396] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:23.889 [2024-07-24 20:00:15.107415] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:23.889 [2024-07-24 20:00:15.107430] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:23.889 [2024-07-24 20:00:15.234828] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:23.889 [2024-07-24 20:00:15.338175] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:23.889 [2024-07-24 20:00:15.338217] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:23.889 [2024-07-24 20:00:15.338237] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:23.889 [2024-07-24 20:00:15.338249] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:23.889 [2024-07-24 20:00:15.338267] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.889 [2024-07-24 20:00:15.345369] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcece60 was disconnected and freed. delete nvme_qpair. 
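nvme0n1 appears in the host's bdev list and the short-lived discovery qpair (0xcece60) is freed, so the initial attach is complete. Everything that follows is paced by two small helpers whose expansions are visible in the xtrace: get_bdev_list flattens the host's bdev names into one sorted line, and wait_for_bdev polls it once per second. A reconstruction inferred from the trace — the canonical bodies live in host/discovery_remove_ifc.sh and may differ in detail (e.g. a timeout guard):

    # Sketch of the polling helpers, as inferred from the xtrace above.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Loop until the bdev list matches the expected value ('' means "gone").
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

This is why each iteration below prints the same rpc_cmd/jq/sort/xargs quartet followed by a one-second pause.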
00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:23.889 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.148 20:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.104 20:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.042 20:00:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.042 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.302 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:26.302 20:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.241 20:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.239 20:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.176 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.435 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.435 [2024-07-24 20:00:20.779387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:29.435 [2024-07-24 20:00:20.779428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.435 [2024-07-24 20:00:20.779438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.435 [2024-07-24 20:00:20.779464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.435 [2024-07-24 20:00:20.779472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.435 [2024-07-24 20:00:20.779480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.435 [2024-07-24 20:00:20.779487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.435 [2024-07-24 20:00:20.779498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.435 [2024-07-24 20:00:20.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.435 [2024-07-24 20:00:20.779512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.435 [2024-07-24 20:00:20.779518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.435 [2024-07-24 20:00:20.779525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb36b0 is same with the state(5) to be set 00:24:29.435 [2024-07-24 20:00:20.789406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb36b0 (9): Bad file descriptor 00:24:29.435 [2024-07-24 20:00:20.799443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.435 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.435 20:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.375 [2024-07-24 20:00:21.827059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:30.375 [2024-07-24 20:00:21.827096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb36b0 with addr=10.0.0.2, port=4420 00:24:30.375 [2024-07-24 20:00:21.827111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb36b0 is same with the state(5) to be set 00:24:30.375 [2024-07-24 20:00:21.827137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb36b0 (9): Bad file descriptor 00:24:30.375 [2024-07-24 20:00:21.827545] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:30.375 [2024-07-24 20:00:21.827572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:30.375 [2024-07-24 20:00:21.827581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:30.375 [2024-07-24 20:00:21.827592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:30.375 [2024-07-24 20:00:21.827610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.375 [2024-07-24 20:00:21.827620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.375 20:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.314 [2024-07-24 20:00:22.830103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:31.314 [2024-07-24 20:00:22.830127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:31.314 [2024-07-24 20:00:22.830136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:31.314 [2024-07-24 20:00:22.830147] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:31.314 [2024-07-24 20:00:22.830161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:31.314 [2024-07-24 20:00:22.830178] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:31.314 [2024-07-24 20:00:22.830200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.314 [2024-07-24 20:00:22.830209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.314 [2024-07-24 20:00:22.830218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.314 [2024-07-24 20:00:22.830225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.314 [2024-07-24 20:00:22.830232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.314 [2024-07-24 20:00:22.830239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.314 [2024-07-24 20:00:22.830246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.314 [2024-07-24 20:00:22.830252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.314 [2024-07-24 20:00:22.830260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.314 [2024-07-24 20:00:22.830267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.314 [2024-07-24 20:00:22.830273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:31.315 [2024-07-24 20:00:22.830389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2a80 (9): Bad file descriptor 00:24:31.315 [2024-07-24 20:00:22.831401] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:31.315 [2024-07-24 20:00:22.831410] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:31.315 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.573 20:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.573 20:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:31.573 20:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.513 20:00:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:32.513 20:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.450 [2024-07-24 20:00:24.850322] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:33.450 [2024-07-24 20:00:24.850339] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:33.450 [2024-07-24 20:00:24.850353] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:33.450 [2024-07-24 20:00:24.936620] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.709 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.709 [2024-07-24 20:00:25.123540] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:33.709 [2024-07-24 20:00:25.123576] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:33.709 [2024-07-24 20:00:25.123593] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:33.710 [2024-07-24 20:00:25.123605] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:33.710 [2024-07-24 20:00:25.123613] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.710 [2024-07-24 20:00:25.130133] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcba180 was disconnected and freed. delete nvme_qpair. 
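[Editor's note] Each wait iteration traced above and below is the same pipeline: bdev_get_bdevs over the host RPC socket, names pulled out with jq, sorted, flattened with xargs, then compared against the expected bdev name with a sleep 1 between retries. A sketch of those helpers as the trace implies them, assuming the /tmp/host.sock socket from this run and that rpc_cmd wraps the SPDK rpc.py client as in the autotest common scripts:

  get_bdev_list() {
      # bdev_get_bdevs returns a JSON array of bdev objects; keep only the names,
      # sorted and flattened onto one line (xargs with no command echoes its args).
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1                     # '' while draining, nvme1n1 while recovering
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }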
00:24:33.710 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:33.710 20:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2166183 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2166183 ']' 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2166183 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2166183 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2166183' 00:24:34.648 killing process with pid 2166183 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2166183 00:24:34.648 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2166183 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.908 rmmod nvme_tcp 00:24:34.908 rmmod nvme_fabrics 00:24:34.908 rmmod nvme_keyring 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2165938 ']' 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2165938 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2165938 ']' 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2165938 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.908 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2165938 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2165938' 00:24:35.166 killing process with pid 2165938 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2165938 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2165938 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.166 20:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.706 00:24:37.706 real 0m22.134s 00:24:37.706 user 0m28.883s 00:24:37.706 sys 0m5.367s 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.706 
************************************ 00:24:37.706 END TEST nvmf_discovery_remove_ifc 00:24:37.706 ************************************ 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.706 ************************************ 00:24:37.706 START TEST nvmf_identify_kernel_target 00:24:37.706 ************************************ 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:37.706 * Looking for test storage... 00:24:37.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.706 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
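[Editor's note] The nvmf/common.sh lines above derive the host identity from nvme-cli: nvme gen-hostnqn emits a UUID-based NQN, and the HOSTID traced at @18 is just the UUID suffix of that NQN (80aaeb9f-0274-ea11-906e-0017a4403562 in this run). A minimal sketch of that derivation; the suffix-stripping expansion is an assumption consistent with the two traced values:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # drop everything through the last ':' to keep the UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")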
00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.707 20:00:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:42.990 
20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:42.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
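[Editor's note] The scan running here walks the e810/x722/mlx PCI-ID tables and, for this tcp run, keeps the two E810 functions (0x8086:0x159b) it finds at 0000:86:00.0 and 0000:86:00.1. Outside the harness, the same inventory can be double-checked with lspci's vendor:device filter plus the sysfs net/ directory the script itself globs a few lines below; a sketch, assuming lspci is installed:

  # List all E810 (0x159b) PCI functions, then show the netdev bound to one of them.
  lspci -d 8086:159b
  ls /sys/bus/pci/devices/0000:86:00.0/net/   # prints the kernel netdev; cvl_0_0 in this run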
00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:42.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:42.990 Found net devices under 0000:86:00.0: cvl_0_0 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:24:42.990 Found net devices under 0000:86:00.1: cvl_0_1 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.990 20:00:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.990 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.990 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.990 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.990 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.990 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:42.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:24:42.991 00:24:42.991 --- 10.0.0.2 ping statistics --- 00:24:42.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.991 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:24:42.991 00:24:42.991 --- 10.0.0.1 ping statistics --- 00:24:42.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.991 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:42.991 20:00:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:45.529 Waiting for block devices as requested 00:24:45.529 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:45.529 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:45.529 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:45.529 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:45.529 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:45.529 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:45.529 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:45.787 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:45.787 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:45.787 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:45.787 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:46.049 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:46.049 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:46.049 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:46.309 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:46.309 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:46.309 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
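[Editor's note] Once /dev/nvme0n1 passes the zoned/in-use checks above, configure_kernel_target builds the kernel nvmet target through configfs: a subsystem directory, a namespace pointing at the block device, and a TCP port the subsystem is symlinked into, which is exactly what the mkdir/echo/ln -s trace below performs. A condensed sketch of that sequence using this run's paths and values; the trace shows only the echoed values, so the attribute filenames here are the standard nvmet configfs names, not taken from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify output
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # expose the subsystem on the port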
00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:46.309 No valid GPT data, bailing 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:46.309 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:46.570 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:46.570 00:24:46.570 Discovery Log Number of Records 2, Generation counter 2 00:24:46.570 =====Discovery Log Entry 0====== 00:24:46.570 trtype: tcp 00:24:46.570 adrfam: ipv4 00:24:46.570 subtype: current discovery subsystem 00:24:46.570 treq: not specified, sq flow control disable supported 00:24:46.570 portid: 1 00:24:46.570 trsvcid: 4420 00:24:46.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:46.570 traddr: 10.0.0.1 00:24:46.570 eflags: none 00:24:46.570 sectype: none 00:24:46.570 =====Discovery Log Entry 1====== 00:24:46.570 trtype: tcp 00:24:46.570 adrfam: ipv4 00:24:46.570 subtype: nvme subsystem 00:24:46.570 treq: not specified, sq flow control disable supported 00:24:46.570 portid: 1 00:24:46.570 trsvcid: 4420 00:24:46.570 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:46.570 traddr: 10.0.0.1 00:24:46.570 eflags: none 00:24:46.570 sectype: none 00:24:46.570 20:00:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:46.571 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:46.571 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.571 ===================================================== 00:24:46.571 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:46.571 ===================================================== 00:24:46.571 Controller Capabilities/Features 00:24:46.571 ================================ 00:24:46.571 Vendor ID: 0000 00:24:46.571 Subsystem Vendor ID: 0000 00:24:46.571 Serial Number: b5203c83335083b6b83c 00:24:46.571 Model Number: Linux 00:24:46.571 Firmware Version: 6.7.0-68 00:24:46.571 Recommended Arb Burst: 0 00:24:46.571 IEEE OUI Identifier: 00 00 00 00:24:46.571 Multi-path I/O 00:24:46.571 May have multiple subsystem ports: No 00:24:46.571 May have multiple controllers: No 00:24:46.571 Associated with SR-IOV VF: No 00:24:46.571 Max Data Transfer Size: Unlimited 00:24:46.571 Max Number of Namespaces: 0 00:24:46.571 Max Number of I/O Queues: 1024 00:24:46.571 NVMe Specification Version (VS): 1.3 00:24:46.571 NVMe Specification Version (Identify): 1.3 00:24:46.571 Maximum Queue Entries: 1024 00:24:46.571 Contiguous Queues Required: No 00:24:46.571 Arbitration Mechanisms Supported 00:24:46.571 Weighted Round Robin: Not Supported 00:24:46.571 Vendor Specific: Not Supported 00:24:46.571 Reset Timeout: 7500 ms 00:24:46.571 Doorbell Stride: 4 bytes 00:24:46.571 NVM Subsystem Reset: Not Supported 00:24:46.571 Command Sets Supported 00:24:46.571 NVM Command Set: Supported 00:24:46.571 Boot Partition: Not Supported 00:24:46.571 Memory Page Size Minimum: 4096 bytes 00:24:46.571 Memory Page Size Maximum: 4096 bytes 00:24:46.571 Persistent Memory Region: Not Supported 00:24:46.571 Optional Asynchronous Events Supported 00:24:46.571 Namespace Attribute Notices: Not Supported 00:24:46.571 Firmware Activation Notices: Not Supported 00:24:46.571 ANA Change Notices: Not Supported 00:24:46.571 PLE Aggregate Log Change Notices: Not Supported 00:24:46.571 LBA Status Info Alert Notices: Not Supported 00:24:46.571 EGE Aggregate Log Change Notices: Not Supported 00:24:46.571 Normal NVM Subsystem Shutdown event: Not Supported 00:24:46.571 Zone Descriptor Change Notices: Not Supported 00:24:46.571 Discovery Log Change Notices: Supported 00:24:46.571 Controller Attributes 00:24:46.571 128-bit Host Identifier: Not Supported 00:24:46.571 Non-Operational Permissive Mode: Not Supported 00:24:46.571 NVM Sets: Not Supported 00:24:46.571 Read Recovery Levels: Not Supported 00:24:46.571 Endurance Groups: Not Supported 00:24:46.571 Predictable Latency Mode: Not Supported 00:24:46.571 Traffic Based Keep ALive: Not Supported 00:24:46.571 Namespace Granularity: Not Supported 00:24:46.571 SQ Associations: Not Supported 00:24:46.571 UUID List: Not Supported 00:24:46.571 Multi-Domain Subsystem: Not Supported 00:24:46.571 Fixed Capacity Management: Not Supported 00:24:46.571 Variable Capacity Management: Not Supported 00:24:46.571 Delete Endurance Group: Not Supported 00:24:46.571 Delete NVM Set: Not Supported 00:24:46.571 Extended LBA Formats Supported: Not Supported 00:24:46.571 Flexible Data Placement Supported: Not Supported 00:24:46.571 00:24:46.571 Controller Memory Buffer Support 00:24:46.571 ================================ 00:24:46.571 Supported: No 
00:24:46.571 00:24:46.571 Persistent Memory Region Support 00:24:46.571 ================================ 00:24:46.571 Supported: No 00:24:46.571 00:24:46.571 Admin Command Set Attributes 00:24:46.571 ============================ 00:24:46.571 Security Send/Receive: Not Supported 00:24:46.571 Format NVM: Not Supported 00:24:46.571 Firmware Activate/Download: Not Supported 00:24:46.571 Namespace Management: Not Supported 00:24:46.571 Device Self-Test: Not Supported 00:24:46.571 Directives: Not Supported 00:24:46.571 NVMe-MI: Not Supported 00:24:46.571 Virtualization Management: Not Supported 00:24:46.571 Doorbell Buffer Config: Not Supported 00:24:46.571 Get LBA Status Capability: Not Supported 00:24:46.571 Command & Feature Lockdown Capability: Not Supported 00:24:46.571 Abort Command Limit: 1 00:24:46.571 Async Event Request Limit: 1 00:24:46.571 Number of Firmware Slots: N/A 00:24:46.571 Firmware Slot 1 Read-Only: N/A 00:24:46.571 Firmware Activation Without Reset: N/A 00:24:46.571 Multiple Update Detection Support: N/A 00:24:46.571 Firmware Update Granularity: No Information Provided 00:24:46.571 Per-Namespace SMART Log: No 00:24:46.571 Asymmetric Namespace Access Log Page: Not Supported 00:24:46.571 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:46.571 Command Effects Log Page: Not Supported 00:24:46.571 Get Log Page Extended Data: Supported 00:24:46.571 Telemetry Log Pages: Not Supported 00:24:46.571 Persistent Event Log Pages: Not Supported 00:24:46.571 Supported Log Pages Log Page: May Support 00:24:46.571 Commands Supported & Effects Log Page: Not Supported 00:24:46.571 Feature Identifiers & Effects Log Page:May Support 00:24:46.571 NVMe-MI Commands & Effects Log Page: May Support 00:24:46.571 Data Area 4 for Telemetry Log: Not Supported 00:24:46.571 Error Log Page Entries Supported: 1 00:24:46.571 Keep Alive: Not Supported 00:24:46.571 00:24:46.571 NVM Command Set Attributes 00:24:46.571 ========================== 00:24:46.571 Submission Queue Entry Size 00:24:46.571 Max: 1 00:24:46.571 Min: 1 00:24:46.571 Completion Queue Entry Size 00:24:46.571 Max: 1 00:24:46.571 Min: 1 00:24:46.571 Number of Namespaces: 0 00:24:46.571 Compare Command: Not Supported 00:24:46.571 Write Uncorrectable Command: Not Supported 00:24:46.571 Dataset Management Command: Not Supported 00:24:46.571 Write Zeroes Command: Not Supported 00:24:46.571 Set Features Save Field: Not Supported 00:24:46.571 Reservations: Not Supported 00:24:46.571 Timestamp: Not Supported 00:24:46.571 Copy: Not Supported 00:24:46.571 Volatile Write Cache: Not Present 00:24:46.571 Atomic Write Unit (Normal): 1 00:24:46.571 Atomic Write Unit (PFail): 1 00:24:46.571 Atomic Compare & Write Unit: 1 00:24:46.571 Fused Compare & Write: Not Supported 00:24:46.571 Scatter-Gather List 00:24:46.571 SGL Command Set: Supported 00:24:46.571 SGL Keyed: Not Supported 00:24:46.571 SGL Bit Bucket Descriptor: Not Supported 00:24:46.571 SGL Metadata Pointer: Not Supported 00:24:46.571 Oversized SGL: Not Supported 00:24:46.571 SGL Metadata Address: Not Supported 00:24:46.571 SGL Offset: Supported 00:24:46.571 Transport SGL Data Block: Not Supported 00:24:46.571 Replay Protected Memory Block: Not Supported 00:24:46.571 00:24:46.571 Firmware Slot Information 00:24:46.571 ========================= 00:24:46.571 Active slot: 0 00:24:46.571 00:24:46.571 00:24:46.571 Error Log 00:24:46.571 ========= 00:24:46.571 00:24:46.571 Active Namespaces 00:24:46.571 ================= 00:24:46.571 Discovery Log Page 00:24:46.571 ================== 00:24:46.571 
Generation Counter: 2 00:24:46.571 Number of Records: 2 00:24:46.571 Record Format: 0 00:24:46.571 00:24:46.571 Discovery Log Entry 0 00:24:46.571 ---------------------- 00:24:46.571 Transport Type: 3 (TCP) 00:24:46.571 Address Family: 1 (IPv4) 00:24:46.571 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:46.571 Entry Flags: 00:24:46.571 Duplicate Returned Information: 0 00:24:46.571 Explicit Persistent Connection Support for Discovery: 0 00:24:46.571 Transport Requirements: 00:24:46.571 Secure Channel: Not Specified 00:24:46.571 Port ID: 1 (0x0001) 00:24:46.571 Controller ID: 65535 (0xffff) 00:24:46.571 Admin Max SQ Size: 32 00:24:46.571 Transport Service Identifier: 4420 00:24:46.571 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:46.571 Transport Address: 10.0.0.1 00:24:46.571 Discovery Log Entry 1 00:24:46.571 ---------------------- 00:24:46.571 Transport Type: 3 (TCP) 00:24:46.571 Address Family: 1 (IPv4) 00:24:46.571 Subsystem Type: 2 (NVM Subsystem) 00:24:46.571 Entry Flags: 00:24:46.571 Duplicate Returned Information: 0 00:24:46.571 Explicit Persistent Connection Support for Discovery: 0 00:24:46.571 Transport Requirements: 00:24:46.571 Secure Channel: Not Specified 00:24:46.571 Port ID: 1 (0x0001) 00:24:46.571 Controller ID: 65535 (0xffff) 00:24:46.571 Admin Max SQ Size: 32 00:24:46.571 Transport Service Identifier: 4420 00:24:46.571 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:46.571 Transport Address: 10.0.0.1 00:24:46.572 20:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:46.572 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.572 get_feature(0x01) failed 00:24:46.572 get_feature(0x02) failed 00:24:46.572 get_feature(0x04) failed 00:24:46.572 ===================================================== 00:24:46.572 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:46.572 ===================================================== 00:24:46.572 Controller Capabilities/Features 00:24:46.572 ================================ 00:24:46.572 Vendor ID: 0000 00:24:46.572 Subsystem Vendor ID: 0000 00:24:46.572 Serial Number: ddbe44619d4ed43e599e 00:24:46.572 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:46.572 Firmware Version: 6.7.0-68 00:24:46.572 Recommended Arb Burst: 6 00:24:46.572 IEEE OUI Identifier: 00 00 00 00:24:46.572 Multi-path I/O 00:24:46.572 May have multiple subsystem ports: Yes 00:24:46.572 May have multiple controllers: Yes 00:24:46.572 Associated with SR-IOV VF: No 00:24:46.572 Max Data Transfer Size: Unlimited 00:24:46.572 Max Number of Namespaces: 1024 00:24:46.572 Max Number of I/O Queues: 128 00:24:46.572 NVMe Specification Version (VS): 1.3 00:24:46.572 NVMe Specification Version (Identify): 1.3 00:24:46.572 Maximum Queue Entries: 1024 00:24:46.572 Contiguous Queues Required: No 00:24:46.572 Arbitration Mechanisms Supported 00:24:46.572 Weighted Round Robin: Not Supported 00:24:46.572 Vendor Specific: Not Supported 00:24:46.572 Reset Timeout: 7500 ms 00:24:46.572 Doorbell Stride: 4 bytes 00:24:46.572 NVM Subsystem Reset: Not Supported 00:24:46.572 Command Sets Supported 00:24:46.572 NVM Command Set: Supported 00:24:46.572 Boot Partition: Not Supported 00:24:46.572 Memory Page Size Minimum: 4096 bytes 00:24:46.572 Memory Page Size Maximum: 4096 bytes 00:24:46.572 
Persistent Memory Region: Not Supported 00:24:46.572 Optional Asynchronous Events Supported 00:24:46.572 Namespace Attribute Notices: Supported 00:24:46.572 Firmware Activation Notices: Not Supported 00:24:46.572 ANA Change Notices: Supported 00:24:46.572 PLE Aggregate Log Change Notices: Not Supported 00:24:46.572 LBA Status Info Alert Notices: Not Supported 00:24:46.572 EGE Aggregate Log Change Notices: Not Supported 00:24:46.572 Normal NVM Subsystem Shutdown event: Not Supported 00:24:46.572 Zone Descriptor Change Notices: Not Supported 00:24:46.572 Discovery Log Change Notices: Not Supported 00:24:46.572 Controller Attributes 00:24:46.572 128-bit Host Identifier: Supported 00:24:46.572 Non-Operational Permissive Mode: Not Supported 00:24:46.572 NVM Sets: Not Supported 00:24:46.572 Read Recovery Levels: Not Supported 00:24:46.572 Endurance Groups: Not Supported 00:24:46.572 Predictable Latency Mode: Not Supported 00:24:46.572 Traffic Based Keep ALive: Supported 00:24:46.572 Namespace Granularity: Not Supported 00:24:46.572 SQ Associations: Not Supported 00:24:46.572 UUID List: Not Supported 00:24:46.572 Multi-Domain Subsystem: Not Supported 00:24:46.572 Fixed Capacity Management: Not Supported 00:24:46.572 Variable Capacity Management: Not Supported 00:24:46.572 Delete Endurance Group: Not Supported 00:24:46.572 Delete NVM Set: Not Supported 00:24:46.572 Extended LBA Formats Supported: Not Supported 00:24:46.572 Flexible Data Placement Supported: Not Supported 00:24:46.572 00:24:46.572 Controller Memory Buffer Support 00:24:46.572 ================================ 00:24:46.572 Supported: No 00:24:46.572 00:24:46.572 Persistent Memory Region Support 00:24:46.572 ================================ 00:24:46.572 Supported: No 00:24:46.572 00:24:46.572 Admin Command Set Attributes 00:24:46.572 ============================ 00:24:46.572 Security Send/Receive: Not Supported 00:24:46.572 Format NVM: Not Supported 00:24:46.572 Firmware Activate/Download: Not Supported 00:24:46.572 Namespace Management: Not Supported 00:24:46.572 Device Self-Test: Not Supported 00:24:46.572 Directives: Not Supported 00:24:46.572 NVMe-MI: Not Supported 00:24:46.572 Virtualization Management: Not Supported 00:24:46.572 Doorbell Buffer Config: Not Supported 00:24:46.572 Get LBA Status Capability: Not Supported 00:24:46.572 Command & Feature Lockdown Capability: Not Supported 00:24:46.572 Abort Command Limit: 4 00:24:46.572 Async Event Request Limit: 4 00:24:46.572 Number of Firmware Slots: N/A 00:24:46.572 Firmware Slot 1 Read-Only: N/A 00:24:46.572 Firmware Activation Without Reset: N/A 00:24:46.572 Multiple Update Detection Support: N/A 00:24:46.572 Firmware Update Granularity: No Information Provided 00:24:46.572 Per-Namespace SMART Log: Yes 00:24:46.572 Asymmetric Namespace Access Log Page: Supported 00:24:46.572 ANA Transition Time : 10 sec 00:24:46.572 00:24:46.572 Asymmetric Namespace Access Capabilities 00:24:46.572 ANA Optimized State : Supported 00:24:46.572 ANA Non-Optimized State : Supported 00:24:46.572 ANA Inaccessible State : Supported 00:24:46.572 ANA Persistent Loss State : Supported 00:24:46.572 ANA Change State : Supported 00:24:46.572 ANAGRPID is not changed : No 00:24:46.572 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:46.572 00:24:46.572 ANA Group Identifier Maximum : 128 00:24:46.572 Number of ANA Group Identifiers : 128 00:24:46.572 Max Number of Allowed Namespaces : 1024 00:24:46.572 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:46.572 Command Effects Log Page: Supported 
00:24:46.572 Get Log Page Extended Data: Supported
00:24:46.572 Telemetry Log Pages: Not Supported
00:24:46.572 Persistent Event Log Pages: Not Supported
00:24:46.572 Supported Log Pages Log Page: May Support
00:24:46.572 Commands Supported & Effects Log Page: Not Supported
00:24:46.572 Feature Identifiers & Effects Log Page:May Support
00:24:46.572 NVMe-MI Commands & Effects Log Page: May Support
00:24:46.572 Data Area 4 for Telemetry Log: Not Supported
00:24:46.572 Error Log Page Entries Supported: 128
00:24:46.572 Keep Alive: Supported
00:24:46.572 Keep Alive Granularity: 1000 ms
00:24:46.572 
00:24:46.572 NVM Command Set Attributes
00:24:46.572 ==========================
00:24:46.572 Submission Queue Entry Size
00:24:46.572 Max: 64
00:24:46.572 Min: 64
00:24:46.572 Completion Queue Entry Size
00:24:46.572 Max: 16
00:24:46.572 Min: 16
00:24:46.572 Number of Namespaces: 1024
00:24:46.572 Compare Command: Not Supported
00:24:46.572 Write Uncorrectable Command: Not Supported
00:24:46.572 Dataset Management Command: Supported
00:24:46.572 Write Zeroes Command: Supported
00:24:46.572 Set Features Save Field: Not Supported
00:24:46.572 Reservations: Not Supported
00:24:46.572 Timestamp: Not Supported
00:24:46.572 Copy: Not Supported
00:24:46.572 Volatile Write Cache: Present
00:24:46.572 Atomic Write Unit (Normal): 1
00:24:46.572 Atomic Write Unit (PFail): 1
00:24:46.572 Atomic Compare & Write Unit: 1
00:24:46.572 Fused Compare & Write: Not Supported
00:24:46.572 Scatter-Gather List
00:24:46.572 SGL Command Set: Supported
00:24:46.572 SGL Keyed: Not Supported
00:24:46.572 SGL Bit Bucket Descriptor: Not Supported
00:24:46.572 SGL Metadata Pointer: Not Supported
00:24:46.572 Oversized SGL: Not Supported
00:24:46.572 SGL Metadata Address: Not Supported
00:24:46.572 SGL Offset: Supported
00:24:46.572 Transport SGL Data Block: Not Supported
00:24:46.572 Replay Protected Memory Block: Not Supported
00:24:46.572 
00:24:46.572 Firmware Slot Information
00:24:46.572 =========================
00:24:46.572 Active slot: 0
00:24:46.572 
00:24:46.572 Asymmetric Namespace Access
00:24:46.572 ===========================
00:24:46.572 Change Count : 0
00:24:46.572 Number of ANA Group Descriptors : 1
00:24:46.572 ANA Group Descriptor : 0
00:24:46.572 ANA Group ID : 1
00:24:46.572 Number of NSID Values : 1
00:24:46.572 Change Count : 0
00:24:46.572 ANA State : 1
00:24:46.572 Namespace Identifier : 1
00:24:46.572 
00:24:46.572 Commands Supported and Effects
00:24:46.572 ==============================
00:24:46.572 Admin Commands
00:24:46.572 --------------
00:24:46.572 Get Log Page (02h): Supported
00:24:46.572 Identify (06h): Supported
00:24:46.572 Abort (08h): Supported
00:24:46.572 Set Features (09h): Supported
00:24:46.572 Get Features (0Ah): Supported
00:24:46.572 Asynchronous Event Request (0Ch): Supported
00:24:46.572 Keep Alive (18h): Supported
00:24:46.572 I/O Commands
00:24:46.572 ------------
00:24:46.573 Flush (00h): Supported
00:24:46.573 Write (01h): Supported LBA-Change
00:24:46.573 Read (02h): Supported
00:24:46.573 Write Zeroes (08h): Supported LBA-Change
00:24:46.573 Dataset Management (09h): Supported
00:24:46.573 
00:24:46.573 Error Log
00:24:46.573 =========
00:24:46.573 Entry: 0
00:24:46.573 Error Count: 0x3
00:24:46.573 Submission Queue Id: 0x0
00:24:46.573 Command Id: 0x5
00:24:46.573 Phase Bit: 0
00:24:46.573 Status Code: 0x2
00:24:46.573 Status Code Type: 0x0
00:24:46.573 Do Not Retry: 1
00:24:46.573 Error Location: 0x28
00:24:46.573 LBA: 0x0
00:24:46.573 Namespace: 0x0
00:24:46.573 Vendor Log Page: 0x0
00:24:46.573 -----------
00:24:46.573 Entry: 1
00:24:46.573 Error Count: 0x2
00:24:46.573 Submission Queue Id: 0x0
00:24:46.573 Command Id: 0x5
00:24:46.573 Phase Bit: 0
00:24:46.573 Status Code: 0x2
00:24:46.573 Status Code Type: 0x0
00:24:46.573 Do Not Retry: 1
00:24:46.573 Error Location: 0x28
00:24:46.573 LBA: 0x0
00:24:46.573 Namespace: 0x0
00:24:46.573 Vendor Log Page: 0x0
00:24:46.573 -----------
00:24:46.573 Entry: 2
00:24:46.573 Error Count: 0x1
00:24:46.573 Submission Queue Id: 0x0
00:24:46.573 Command Id: 0x4
00:24:46.573 Phase Bit: 0
00:24:46.573 Status Code: 0x2
00:24:46.573 Status Code Type: 0x0
00:24:46.573 Do Not Retry: 1
00:24:46.573 Error Location: 0x28
00:24:46.573 LBA: 0x0
00:24:46.573 Namespace: 0x0
00:24:46.573 Vendor Log Page: 0x0
00:24:46.573 
00:24:46.573 Number of Queues
00:24:46.573 ================
00:24:46.573 Number of I/O Submission Queues: 128
00:24:46.573 Number of I/O Completion Queues: 128
00:24:46.573 
00:24:46.573 ZNS Specific Controller Data
00:24:46.573 ============================
00:24:46.573 Zone Append Size Limit: 0
00:24:46.573 
00:24:46.573 
00:24:46.573 Active Namespaces
00:24:46.573 =================
00:24:46.573 get_feature(0x05) failed
00:24:46.573 Namespace ID:1
00:24:46.573 Command Set Identifier: NVM (00h)
00:24:46.573 Deallocate: Supported
00:24:46.573 Deallocated/Unwritten Error: Not Supported
00:24:46.573 Deallocated Read Value: Unknown
00:24:46.573 Deallocate in Write Zeroes: Not Supported
00:24:46.573 Deallocated Guard Field: 0xFFFF
00:24:46.573 Flush: Supported
00:24:46.573 Reservation: Not Supported
00:24:46.573 Namespace Sharing Capabilities: Multiple Controllers
00:24:46.573 Size (in LBAs): 1953525168 (931GiB)
00:24:46.573 Capacity (in LBAs): 1953525168 (931GiB)
00:24:46.573 Utilization (in LBAs): 1953525168 (931GiB)
00:24:46.573 UUID: 351b22db-90c3-4eb9-bf46-61d4cc181284
00:24:46.573 Thin Provisioning: Not Supported
00:24:46.573 Per-NS Atomic Units: Yes
00:24:46.573 Atomic Boundary Size (Normal): 0
00:24:46.573 Atomic Boundary Size (PFail): 0
00:24:46.573 Atomic Boundary Offset: 0
00:24:46.573 NGUID/EUI64 Never Reused: No
00:24:46.573 ANA group ID: 1
00:24:46.573 Namespace Write Protected: No
00:24:46.573 Number of LBA Formats: 1
00:24:46.573 Current LBA Format: LBA Format #00
00:24:46.573 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:46.573 
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:46.573 rmmod nvme_tcp
00:24:46.573 rmmod nvme_fabrics
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:46.573 20:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:24:49.150 20:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:24:51.076 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:51.076 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:52.016 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:24:52.016 
00:24:52.016 real 0m14.767s
00:24:52.016 user 0m3.489s
00:24:52.016 sys 0m7.636s
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:24:52.016 ************************************
00:24:52.016 END TEST nvmf_identify_kernel_target
00:24:52.016 ************************************
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:52.016 20:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.275 ************************************
00:24:52.275 START TEST nvmf_auth_host
00:24:52.275 ************************************
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:52.275 * Looking for test storage...
00:24:52.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.275 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.276 20:00:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.551 20:00:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.551 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.551 20:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.551 Found net devices under 0000:86:00.1: cvl_0_1 00:24:57.551 20:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.551 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:24:57.812 00:24:57.812 --- 10.0.0.2 ping statistics --- 00:24:57.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.812 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:57.812 00:24:57.812 --- 10.0.0.1 ping statistics --- 00:24:57.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.812 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2177724 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2177724 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2177724 ']' 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
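
[Editor's note] The nvmftestinit/nvmf_tcp_init trace above builds a two-endpoint NVMe/TCP test network out of the two E810 ports by moving one of them into a private network namespace. A minimal sketch of the same topology, using only commands that appear in the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are what this particular run discovered, not fixed values):

  ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check

Both pings coming back with 0% loss, as in the statistics above, is the precondition for launching nvmf_tgt inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt invocation that follows).
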
00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.812 20:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9505bb3187732ac25904018d36356ff8 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.C8g 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9505bb3187732ac25904018d36356ff8 0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9505bb3187732ac25904018d36356ff8 0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9505bb3187732ac25904018d36356ff8 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.C8g 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.C8g 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.C8g 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:58.750 20:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b9aae9df6e0cd4252e4592e0aa384d107e5d7d53e591acb8658c295b4ac7a80d 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZjU 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b9aae9df6e0cd4252e4592e0aa384d107e5d7d53e591acb8658c295b4ac7a80d 3 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b9aae9df6e0cd4252e4592e0aa384d107e5d7d53e591acb8658c295b4ac7a80d 3 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b9aae9df6e0cd4252e4592e0aa384d107e5d7d53e591acb8658c295b4ac7a80d 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZjU 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZjU 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ZjU 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8fb5007e9bd397eb65edb6f372065f6f402891db676b534f 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FvV 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8fb5007e9bd397eb65edb6f372065f6f402891db676b534f 0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8fb5007e9bd397eb65edb6f372065f6f402891db676b534f 0 
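
[Editor's note] Each gen_dhchap_key call above draws len/2 random bytes with xxd -p and hands the resulting hex string to format_key DHHC-1 <key> <digest>; the python one-liner doing the formatting is not expanded in this trace. As a hedged illustration only: if that helper emits the standard DH-HMAC-CHAP secret representation, DHHC-1:<hash>:<base64 of the secret followed by its little-endian CRC32>:, where hash id 00 means an unhashed secret and 01/02/03 select sha256/sha384/sha512, the equivalent in plain shell would be:

  secret=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in 'gen_dhchap_key null 32'
  b64=$( { printf '%s' "$secret"
           # gzip's 8-byte trailer is CRC32 (little-endian) then input size;
           # keep just the 4 CRC32 bytes
           printf '%s' "$secret" | gzip -c | tail -c 8 | head -c 4
         } | base64 -w0 )
  echo "DHHC-1:00:${b64}:"                  # digest id 00 = plain (null) secret

Whether SPDK's helper treats the hex string itself as the secret bytes, exactly as sketched here, is an assumption; the trace only shows the inputs (key and digest id) and the resulting /tmp/spdk.key-* file names.
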
00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8fb5007e9bd397eb65edb6f372065f6f402891db676b534f 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:58.750 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FvV 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FvV 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FvV 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=808d9eb8d906f57e4e1b2be633539e36516c1d86c7e8a187 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DZM 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 808d9eb8d906f57e4e1b2be633539e36516c1d86c7e8a187 2 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 808d9eb8d906f57e4e1b2be633539e36516c1d86c7e8a187 2 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=808d9eb8d906f57e4e1b2be633539e36516c1d86c7e8a187 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DZM 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DZM 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DZM 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.010 20:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=099e6b19c3728c13ab950e0fdf14c436 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FX0 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 099e6b19c3728c13ab950e0fdf14c436 1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 099e6b19c3728c13ab950e0fdf14c436 1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=099e6b19c3728c13ab950e0fdf14c436 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FX0 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FX0 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FX0 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36075d9392be750ddad02c66564d021d 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.neX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36075d9392be750ddad02c66564d021d 1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36075d9392be750ddad02c66564d021d 1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=36075d9392be750ddad02c66564d021d 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.neX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.neX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.neX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=997bfd276993a97f3859a7d99f2cef28cfe9398db4ef319b 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rTQ 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 997bfd276993a97f3859a7d99f2cef28cfe9398db4ef319b 2 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 997bfd276993a97f3859a7d99f2cef28cfe9398db4ef319b 2 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.010 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=997bfd276993a97f3859a7d99f2cef28cfe9398db4ef319b 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rTQ 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rTQ 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rTQ 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:59.011 20:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60d108b00edabcfa1e901cd14faa3eea 00:24:59.011 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FOO 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60d108b00edabcfa1e901cd14faa3eea 0 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60d108b00edabcfa1e901cd14faa3eea 0 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60d108b00edabcfa1e901cd14faa3eea 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FOO 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FOO 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.FOO 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1e8d36fa73895867d90954e0856b9c7bf2275ce701aaf50cdd4fcfa17d609e72 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EVT 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1e8d36fa73895867d90954e0856b9c7bf2275ce701aaf50cdd4fcfa17d609e72 3 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1e8d36fa73895867d90954e0856b9c7bf2275ce701aaf50cdd4fcfa17d609e72 3 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1e8d36fa73895867d90954e0856b9c7bf2275ce701aaf50cdd4fcfa17d609e72 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EVT 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EVT 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.EVT 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2177724 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2177724 ']' 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.269 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.529 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.529 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.C8g 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ZjU ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZjU 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FvV 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DZM ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.DZM 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FX0 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.neX ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.neX 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rTQ 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FOO ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FOO 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.EVT 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.530 20:00:50 
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:24:59.530 20:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:25:00.068 20:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:00.068 20:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:02.068 Waiting for block devices as requested
00:25:02.068 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:02.328 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:02.328 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:02.328 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:02.328 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:02.587 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:02.587 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:02.587 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:02.587 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:02.848 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:02.848 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:02.848 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:03.108 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:03.108 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:03.108 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:03.108 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:03.389 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:03.966
00:25:03.966 Discovery Log Number of Records 2, Generation counter 2
00:25:03.966 =====Discovery Log Entry 0======
00:25:03.966 trtype: tcp
00:25:03.966 adrfam: ipv4
00:25:03.966 subtype: current discovery subsystem
00:25:03.966 treq: not specified, sq flow control disable supported
00:25:03.966 portid: 1
00:25:03.966 trsvcid: 4420
00:25:03.966 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:03.966 traddr: 10.0.0.1
00:25:03.966 eflags: none
00:25:03.966 sectype: none
00:25:03.966 =====Discovery Log Entry 1======
00:25:03.966 trtype: tcp
00:25:03.966 adrfam: ipv4
00:25:03.966 subtype: nvme subsystem
00:25:03.966 treq: not specified, sq flow control disable supported
00:25:03.966 portid: 1
00:25:03.966 trsvcid: 4420
00:25:03.966 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:03.966 traddr: 10.0.0.1
00:25:03.966 eflags: none
00:25:03.966 sectype: none
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==:
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==:
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==:
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==:
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:03.966 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.225 nvme0n1
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]]
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
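
Everything the kernel-target side did above (configure_kernel_target plus nvmet_auth_set_key) is plain configfs. The xtrace output shows the echo/mkdir/ln -s commands but not their redirection targets, so the attribute paths in the following consolidated sketch are assumptions based on the standard /sys/kernel/config/nvmet layout; the NQNs, namespace device, listen address and DHHC-1 strings are the ones from the trace:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Subsystem with one namespace backed by the freshly reset NVMe drive.
    mkdir -p "$subsys/namespaces/1" "$port" "$host"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"  # assumed target of the 'echo SPDK-...' above
    echo 1 > "$subsys/attr_allow_any_host"                       # flipped back to 0 below
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    # TCP listener on 10.0.0.1:4420, then expose the subsystem through it.
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Restrict access to one host NQN and provision its DHCHAP material
    # (nvmet_auth_set_key in the trace); dhchap_* are the kernel's in-band
    # authentication attributes on the host entry.
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$host" "$subsys/allowed_hosts/"
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==:' > "$host/dhchap_ctrl_key"

The dhchap_hash/dhchap_dhgroup writes pin what the target expects, while the host side constrains its own offer separately through bdev_nvme_set_options; the sha256/ffdhe2048-style sweep in the rest of the log iterates over exactly those two knobs plus the key slot.
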
00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.225 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.226 nvme0n1 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.226 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.485 20:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.485 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.486 nvme0n1 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.486 20:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.486 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 nvme0n1 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.746 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.747 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.007 nvme0n1 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.007 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.008 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 nvme0n1 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.268 20:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.268 nvme0n1 00:25:05.268 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.527 
20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.527 nvme0n1 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.527 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.787 20:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.787 nvme0n1 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.787 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.788 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.048 20:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.048 nvme0n1 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.048 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.049 20:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.049 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.308 nvme0n1 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.308 20:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 nvme0n1 00:25:06.567 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.567 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.567 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.567 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.567 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.568 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.568 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.568 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.568 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.568 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:06.828 20:00:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.828 nvme0n1 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.828 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
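The records above trace one pass of the test's inner loop for keyid 2: program the target with the key pair via nvmet_auth_set_key, restrict the SPDK host to a single digest/DH-group combination, then (in the records that follow) resolve the initiator IP, attach, verify, and detach. A minimal sketch of that host-side cycle, assuming SPDK's scripts/rpc.py as the RPC client and keys[]/ckeys[] arrays populated earlier in the run (both assumptions; every RPC name and flag below appears verbatim in the trace):

    # Sketch of the connect/verify/detach cycle visible in this trace.
    rpc_py=./scripts/rpc.py   # assumed location of the SPDK RPC client

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3   # e.g. sha256 ffdhe4096 2

        # Allow only the combination under test, so a successful attach
        # proves this digest/DH-group pair really negotiated.
        "$rpc_py" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # The controller key is optional (keyid 4 has none), hence the
        # ckey=(${ckeys[keyid]:+...}) expansion seen in the trace.
        local -a ckey=()
        [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

        # 10.0.0.1 is the initiator IP that get_main_ns_ip resolves above.
        "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only completes after DH-HMAC-CHAP succeeds, so the
        # controller showing up in the list is the pass condition.
        [[ $("$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

        "$rpc_py" bdev_nvme_detach_controller nvme0
    }

Each iteration, e.g. connect_authenticate sha256 ffdhe4096 2, exercises exactly one digest/DH-group/key combination; the nvme0n1 lines interleaved in the trace are the namespace surfacing once the controller attaches.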
00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.087 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.347 nvme0n1 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.347 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.348 20:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 nvme0n1 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.608 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.609 20:00:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.609 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.869 nvme0n1 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.869 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.870 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.440 nvme0n1 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.440 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 
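On the target side, the nvmet_auth_set_key records above (the quoted 'hmac(sha256)', the ffdhe6144 echo, the DHHC-1 secrets, and the [[ -z ... ]] guard before the optional controller key) line up with the per-host DH-HMAC-CHAP attributes that the Linux nvmet soft target exposes through configfs. A plausible reconstruction of the helper, assuming the standard configfs mount point, this run's host NQN, and the same keys[]/ckeys[] arrays as above; the dhchap_* attribute names come from the kernel's nvmet interface, not from this excerpt:

    # Sketch: push one key pair into the kernel nvmet host entry.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3   # e.g. sha256 ffdhe6144 1
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # crypto-API digest name
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # FFDHE group to offer
        echo "${keys[keyid]}"  > "${host}/dhchap_key"      # host secret, DHHC-1 format
        # The controller secret is optional; the trace guards it with
        # [[ -z ... ]] because keyid 4 carries no ckey.
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
    }

The secrets themselves use the NVMe-oF DHHC-1 representation, DHHC-1:XX:<base64>:, where XX says how the secret is applied (00 as-is, 01/02/03 for SHA-256/384/512 transformation of 32/48/64-byte secrets) and the base64 payload carries the secret followed by a 4-byte CRC-32 check value; that is why the :00: and :01: keys in this trace decode to 36 bytes while the :03: key decodes to 68.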
00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.441 20:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.700 nvme0n1 00:25:08.700 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.700 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.700 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.700 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.701 20:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.701 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.272 nvme0n1 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.272 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.273 20:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.533 nvme0n1 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.533 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.534 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.104 nvme0n1 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.104 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.105 20:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.721 nvme0n1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.721 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.291 nvme0n1 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:11.291 
20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.291 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
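
Each bare "nvme0n1" line above is the bdev name returned by bdev_nvme_attach_controller when a DH-HMAC-CHAP connection succeeds, and the [[ nvme0 == \n\v\m\e\0 ]] frames are just xtrace escaping a quoted, literal string comparison. The sequence the trace keeps repeating can be replayed by hand roughly as below, a minimal sketch assuming a running SPDK target at 10.0.0.1:4420 with keyring entries named keyN/ckeyN already loaded; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py.

    digest=sha256 dhgroup=ffdhe8192 keyid=2

    # Pin the host to a single digest/dhgroup pair, as host/auth.sh@60 does:
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP; the controller key makes it bidirectional:
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify the controller exists, then tear it down for the next iteration:
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
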
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.292 20:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.860 nvme0n1 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.860 
20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.860 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.861 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.861 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.861 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.431 nvme0n1 00:25:12.431 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.431 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.431 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.431 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.431 20:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.431 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
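
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) frames rely on bash's ${var:+word} expansion, which yields word only when var is set and non-empty. That is why keyid 4, whose controller key is the empty string (the [[ -z '' ]] checks above), is attached without any --dhchap-ctrlr-key argument. A self-contained illustration of the idiom:

    ckeys=([0]=some-secret [4]='')   # keyid 4 has no controller key, as in this run
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s):" "${ckey[@]}"
    done
    # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra arg(s):
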
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.691 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 nvme0n1 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.261 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.262 nvme0n1 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
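
The DHHC-1 strings exchanged above follow the NVMe-oF representation of a DH-HMAC-CHAP secret: "DHHC-1:<hash-id>:<base64 of secret plus 4-byte CRC32>:", where hash-id 00 is an untransformed secret and 01/02/03 denote SHA-256/384/512 transforms. The field widths can be checked directly against one of this run's keys; a quick sketch:

    key='DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:'
    b64=${key#DHHC-1:??:}                # drop the "DHHC-1:<hash-id>:" prefix
    b64=${b64%:}                         # ...and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC32

The longer DHHC-1:03: keys in this log decode the same way to 64 secret bytes plus the CRC.
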
common/autotest_common.sh@10 -- # set +x 00:25:13.262 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:13.524 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.525 20:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.525 nvme0n1 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:13.525 20:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.525 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.526 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.788 nvme0n1 00:25:13.788 20:01:05 
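
The nvmf/common.sh@741-755 frames that precede every attach are get_main_ns_ip picking the address the host should dial for the active transport. A simplified reconstruction from the xtrace follows; the real helper in nvmf/common.sh may guard a few more cases:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion of that name
        echo "${!ip}"
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip                             # -> 10.0.0.1, as in the trace
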
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.788 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.048 nvme0n1 00:25:14.048 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.048 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.048 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.049 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 nvme0n1 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 nvme0n1 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.308 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.568 
20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.568 20:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.568 20:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.568 nvme0n1 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.568 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 nvme0n1 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:14.829 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.830 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.088 nvme0n1 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.088 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.089 
20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.089 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.349 nvme0n1 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.349 
20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.349 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.350 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.350 20:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.609 nvme0n1 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.609 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:15.869 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.870 20:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.870 nvme0n1 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.870 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.130 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.390 nvme0n1 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.390 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.391 20:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.650 nvme0n1 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.650 20:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.650 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.909 nvme0n1 00:25:16.909 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.909 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.909 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.910 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.480 nvme0n1 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:17.480 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.481 20:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.741 nvme0n1 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.741 20:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.741 20:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.741 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 nvme0n1 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.313 20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.313 
20:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.573 nvme0n1 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.573 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.833 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.094 nvme0n1 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.094 20:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.094 20:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.663 nvme0n1 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.663 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.924 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.495 nvme0n1 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:20.495 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.496 
20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.496 20:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.066 nvme0n1 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.066 20:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.638 nvme0n1 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.638 20:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.638 20:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.638 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.218 nvme0n1 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.218 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.478 nvme0n1 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.478 20:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.478 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.738 nvme0n1 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:22.738 
20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.738 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.997 nvme0n1 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.997 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.998 
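The get_main_ns_ip expansions traced just above (nvmf/common.sh@741-755) show a small helper that maps the transport to the name of the environment variable holding the usable IP, then prints that variable's value via indirect expansion. An approximate reconstruction for readability; only the traced expansions are certain, and the variable feeding the lookup (TEST_TRANSPORT below) is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # target-side IP on RDMA runs
            ["tcp"]=NVMF_INITIATOR_IP       # initiator-side IP on TCP runs
        )
        # traced as the two [[ -z ... ]] checks at nvmf/common.sh@747
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip} is indirect expansion, e.g. $NVMF_INITIATOR_IP
        echo "${!ip}"                 # traced result on this run: 10.0.0.1
    }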
20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.998 nvme0n1 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.998 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.258 nvme0n1 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.258 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.259 20:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 nvme0n1 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.519 
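The pass/fail check for each iteration is traced just below (host/auth.sh@64-65): if authentication succeeded, bdev_nvme_get_controllers reports the controller under the expected name, which is then detached before the next key is tried. The backslashes in the traced [[ nvme0 == \n\v\m\e\0 ]] are only xtrace escaping an unquoted glob pattern; the comparison is a plain name match. Spelled out as a minimal sketch, assuming SPDK's stock scripts/rpc.py client is what the autotest rpc_cmd wrapper drives:

    rpc=scripts/rpc.py
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                     # a failed auth leaves no controller behind
    $rpc bdev_nvme_detach_controller nvme0   # clean up for the next key/dhgroup pass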
20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.519 20:01:15 
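The rpc_cmd pair traced around this point repeats for every digest/dhgroup/keyid combination: bdev_nvme_set_options pins the initiator to a single DH-HMAC-CHAP digest and DH group, then bdev_nvme_attach_controller performs the authenticated connect. A minimal sketch of the same pair for the ffdhe3072/keyid=1 pass that continues below, again assuming scripts/rpc.py as the backend and that key1/ckey1 are key names the script registered earlier in the run (outside this excerpt):

    rpc=scripts/rpc.py

    # Pin the initiator to one digest/DH-group pair for this iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach; DH-HMAC-CHAP runs during the CONNECT using the named keys.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1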
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.519 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.520 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.780 nvme0n1 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:23.780 20:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx: 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.780 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.039 nvme0n1 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.039 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.040 20:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.040 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.300 nvme0n1 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.300 
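The echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... triplets traced here are nvmet_auth_set_key provisioning the target side for the next keyid. The helper's body is not shown in this log; below is a hypothetical reconstruction, assuming the kernel nvmet configfs auth attributes (hosts/<hostnqn>/dhchap_*) are the destination of those echoes. Only the echoed values (digest, dhgroup, keys) come from the trace:

    # Hypothetical: the configfs path and attribute names are assumptions;
    # expects the script's keys[]/ckeys[] arrays to be in scope.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"  > "$host/dhchap_hash"     # traced: echo 'hmac(sha512)'
        echo "$dhgroup"       > "$host/dhchap_dhgroup"  # traced: echo ffdhe3072
        echo "${keys[keyid]}" > "$host/dhchap_key"      # host secret, DHHC-1:NN:<base64>:
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }

In the DHHC-1:NN:<base64>: strings, NN names the hash used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret followed by a CRC-32, per the NVMe DH-HMAC-CHAP secret representation.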
20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.300 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
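keyid=4 above is the asymmetric case: ckeys[4] is empty, so the traced ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expands to nothing and the attach carries --dhchap-key key4 with no controller key, i.e. unidirectional authentication. The bash mechanics, shown standalone with made-up placeholder values:

    # ${var:+words} yields the alternate words only when var is set and
    # non-empty, so an empty ckeys[4] collapses the whole option pair.
    declare -a ckeys=([0]="some-secret" [4]="")
    opts_for() {
        local keyid=$1
        local opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#opt[@]} extra args: ${opt[*]}"
    }
    opts_for 0   # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    opts_for 4   # keyid=4 -> 0 extra args: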
00:25:24.560 nvme0n1 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.560 20:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re: 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.560 20:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.560 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.821 nvme0n1 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.821 20:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.821 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.822 20:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:24.822 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.081 nvme0n1
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.081 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.339 nvme0n1
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
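The nvmet_auth_set_key traces above show bare echo commands because xtrace does not print redirection targets. In this test those writes configure DH-HMAC-CHAP for the host entry on the kernel nvmet target. A minimal sketch of the likely effect for keyid 2, assuming the standard Linux nvmet configfs layout; the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are not visible in this log and are an assumption:

    # assumed configfs host entry for the hostnqn used in this run
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"    # digest for DH-HMAC-CHAP
    echo 'ffdhe4096' > "$host_cfs/dhchap_dhgroup"    # DH group under test
    # host secret (key2) and controller secret (ckey2), as echoed above
    echo 'DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:' > "$host_cfs/dhchap_key"
    echo 'DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:' > "$host_cfs/dhchap_ctrl_key"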
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.339 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==:
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa:
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==:
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa:
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:25.599 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:25.600 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.600 20:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.600 nvme0n1
00:25:25.600 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.600 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:25.600 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:25.600 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.600 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=:
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=:
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:25.859 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.121 nvme0n1
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
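Note the keyid=4 iteration above: ckey= is empty, and the subsequent bdev_nvme_attach_controller is issued without --dhchap-ctrlr-key, so key 4 exercises unidirectional authentication. That is the work of the ${var:+...} expansion at host/auth.sh@58. A standalone demonstration of the idiom (array contents here are illustrative, not the test's real secrets):

    # indexed array: subscripts are arithmetic, so [keyid] uses $keyid
    ckeys=([3]="some-secret" [4]="")
    for keyid in 3 4; do
        # expands to the extra flag only when a controller key is set
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: --dhchap-key key${keyid} ${ckey[*]}"
    done
    # keyid=3: --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # keyid=4: --dhchap-key key4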
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.121 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.389 nvme0n1
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==:
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==:
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==:
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]]
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==:
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:26.389 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.390 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:26.390 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.390 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.648 20:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.912 nvme0n1
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:26.912 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]]
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:26.913 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.481 nvme0n1
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
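get_main_ns_ip, traced repeatedly above from nvmf/common.sh@741-755, resolves which address the initiator should dial for the current transport. A reconstruction from the trace; the name of the transport variable is an assumption (only its value, tcp, is visible), and the ${!ip} indirection is what turns ip=NVMF_INITIATOR_IP into the final [[ -z 10.0.0.1 ]] check:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # transport variable name assumed; its value here is "tcp"
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}    # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }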
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==:
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa:
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==:
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa:
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.481 20:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.740 nvme0n1
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=:
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=:
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.740 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:28.310 nvme0n1
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUwNWJiMzE4NzczMmFjMjU5MDQwMThkMzYzNTZmZjjQs8re:
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=: ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjlhYWU5ZGY2ZTBjZDQyNTJlNDU5MmUwYWEzODRkMTA3ZTVkN2Q1M2U1OTFhY2I4NjU4YzI5NWI0YWM3YTgwZKvtudI=:
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
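The @101/@102 lines above mark the start of the final ffdhe8192 leg of the sweep: a driver loop in host/auth.sh pairs every DH group with every key, reprogramming the nvmet target and the SPDK initiator on each iteration. A reduced sketch of that loop; the dhgroups contents and the sha512 digest are taken from this log, while the keys/ckeys arrays and any earlier digest passes are assumed from outside this excerpt:

    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (configfs)
            connect_authenticate sha512 "$dhgroup" "$keyid"  # initiator side (SPDK RPCs)
        done
    done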
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.310 20:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.877 nvme0n1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.877 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.445 nvme0n1 00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.445 20:01:20 
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq: ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwNzVkOTM5MmJlNzUwZGRhZDAyYzY2NTY0ZDAyMWRIDzPq:
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:29.445 20:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:29.445 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.445 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.014 nvme0n1
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.014 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
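All of the secrets cycling through this log share the DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> names an optional transformation of the secret (00 for none; the 01/02/03 values here track SHA-256/384/512 with 32/48/64-byte secrets) and the base64 payload carries the secret followed by a 4-byte CRC-32 check value. A quick length check against key2 from the iteration above; the transform/CRC interpretation is our reading of the format, not something the log states:

    key='DHHC-1:01:MDk5ZTZiMTljMzcyOGMxM2FiOTUwZTBmZGYxNGM0MzbUtyAx:'
    b64=${key#DHHC-1:??:}   # strip the "DHHC-1:01:" prefix
    b64=${b64%:}            # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # prints 36: 32-byte secret + CRC-32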
key=DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk3YmZkMjc2OTkzYTk3ZjM4NTlhN2Q5OWYyY2VmMjhjZmU5Mzk4ZGI0ZWYzMTlicZe0JA==: 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: ]] 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBkMTA4YjAwZWRhYmNmYTFlOTAxY2QxNGZhYTNlZWHoV9pa: 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.273 20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.273 
20:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.840 nvme0n1 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWU4ZDM2ZmE3Mzg5NTg2N2Q5MDk1NGUwODU2YjljN2JmMjI3NWNlNzAxYWFmNTBjZGQ0ZmNmYTE3ZDYwOWU3MqcML6s=: 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.840 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.411 nvme0n1 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.411 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZiNTAwN2U5YmQzOTdlYjY1ZWRiNmYzNzIwNjVmNmY0MDI4OTFkYjY3NmI1MzRmTQubpg==: 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODA4ZDllYjhkOTA2ZjU3ZTRlMWIyYmU2MzM1MzllMzY1MTZjMWQ4NmM3ZThhMTg3OkhRaA==: 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 request: 00:25:31.412 { 00:25:31.412 "name": "nvme0", 00:25:31.412 "trtype": "tcp", 00:25:31.412 "traddr": "10.0.0.1", 00:25:31.412 "adrfam": "ipv4", 00:25:31.412 "trsvcid": "4420", 00:25:31.412 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.412 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.412 "prchk_reftag": false, 00:25:31.412 "prchk_guard": false, 00:25:31.412 "hdgst": false, 00:25:31.412 "ddgst": false, 00:25:31.412 "method": "bdev_nvme_attach_controller", 00:25:31.412 "req_id": 1 00:25:31.412 } 00:25:31.412 Got JSON-RPC error response 00:25:31.412 response: 00:25:31.412 { 00:25:31.412 "code": -5, 00:25:31.412 "message": "Input/output error" 00:25:31.412 } 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.412 20:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.412 20:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 request: 00:25:31.672 { 00:25:31.672 "name": "nvme0", 00:25:31.672 "trtype": "tcp", 00:25:31.672 "traddr": "10.0.0.1", 00:25:31.672 "adrfam": "ipv4", 00:25:31.672 "trsvcid": "4420", 00:25:31.672 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.672 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.672 "prchk_reftag": false, 00:25:31.672 "prchk_guard": false, 00:25:31.672 "hdgst": false, 00:25:31.672 "ddgst": false, 00:25:31.672 "dhchap_key": "key2", 00:25:31.672 "method": "bdev_nvme_attach_controller", 00:25:31.672 "req_id": 1 00:25:31.672 } 00:25:31.672 Got JSON-RPC error response 00:25:31.672 response: 00:25:31.672 { 00:25:31.672 "code": -5, 00:25:31.672 "message": "Input/output error" 00:25:31.672 } 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.672 request: 00:25:31.672 { 00:25:31.672 "name": "nvme0", 00:25:31.672 "trtype": "tcp", 00:25:31.672 "traddr": "10.0.0.1", 00:25:31.672 "adrfam": "ipv4", 00:25:31.672 "trsvcid": "4420", 00:25:31.672 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.672 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.672 "prchk_reftag": false, 00:25:31.672 "prchk_guard": false, 00:25:31.672 "hdgst": false, 00:25:31.672 "ddgst": false, 00:25:31.672 "dhchap_key": "key1", 00:25:31.672 "dhchap_ctrlr_key": "ckey2", 00:25:31.672 "method": "bdev_nvme_attach_controller", 00:25:31.672 "req_id": 1 00:25:31.672 } 00:25:31.672 Got JSON-RPC error response 00:25:31.672 response: 00:25:31.672 { 00:25:31.672 "code": -5, 00:25:31.672 "message": "Input/output error" 00:25:31.672 } 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.672 rmmod nvme_tcp 00:25:31.672 rmmod nvme_fabrics 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2177724 ']' 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2177724 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2177724 ']' 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2177724 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2177724 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2177724' 00:25:31.672 killing process with pid 2177724 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2177724 00:25:31.672 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2177724 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.930 20:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.930 20:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:33.835 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:34.095 20:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.632 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.632 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:37.572 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:37.572 20:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.C8g /tmp/spdk.key-null.FvV /tmp/spdk.key-sha256.FX0 /tmp/spdk.key-sha384.rTQ /tmp/spdk.key-sha512.EVT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:37.572 20:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:40.109 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:40.109 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:40.109 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:40.109 00:25:40.109 real 0m48.040s 00:25:40.109 user 0m43.157s 00:25:40.109 sys 0m11.686s 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.109 ************************************ 00:25:40.109 END TEST nvmf_auth_host 00:25:40.109 ************************************ 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.109 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.374 ************************************ 00:25:40.374 START TEST nvmf_digest 00:25:40.374 ************************************ 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:40.374 * Looking for test storage... 
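Before the digest output continues: the nvmf_auth_host run above boils down to a short RPC sequence per key slot. A minimal sketch, using scripts/rpc.py directly in place of the suite's rpc_cmd and NOT wrappers (address, NQNs, and key slots copied from the trace; the DHHC-1 secrets are the test's generated keys):

    # Host-side DHCHAP configuration: restrict digests/dhgroups, then attach
    # with the host key and the controller (bidirectional) key for this slot.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py bdev_nvme_get_controllers          # expect nvme0 in the output
    rpc.py bdev_nvme_detach_controller nvme0
    # Negative paths: attaching with no key, or with a mismatched key1/ckey2
    # pair, must fail; the JSON-RPC responses above report code -5
    # (Input/output error), which the suite asserts via its NOT helper.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # expected to fail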
00:25:40.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:40.374 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.375 
20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.375 20:01:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.660 
20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.660 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.660 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.660 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.661 20:01:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.661 20:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:25:45.661 00:25:45.661 --- 10.0.0.2 ping statistics --- 00:25:45.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.661 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:25:45.661 00:25:45.661 --- 10.0.0.1 ping statistics --- 00:25:45.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.661 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 ************************************ 00:25:45.661 START TEST nvmf_digest_clean 00:25:45.661 ************************************ 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2190740 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2190740 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2190740 ']' 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:45.661 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 [2024-07-24 20:01:37.171538] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:25:45.661 [2024-07-24 20:01:37.171578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.661 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.661 [2024-07-24 20:01:37.228623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.920 [2024-07-24 20:01:37.308876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.920 [2024-07-24 20:01:37.308911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.920 [2024-07-24 20:01:37.308919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.920 [2024-07-24 20:01:37.308925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.920 [2024-07-24 20:01:37.308930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
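For reference, the TCP test bed this run attaches to was assembled earlier in the trace (nvmf/common.sh@242-268) with ordinary iproute2 commands. Condensed below; cvl_0_0 and cvl_0_1 are the names this rig assigns to the two E810 ports, with the target port moved into its own network namespace so initiator and target traffic cross a real link:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc), which is the command visible just above.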
00:25:45.920 [2024-07-24 20:01:37.308947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.488 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:46.488 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:46.488 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:46.488 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.488 20:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.488 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.746 null0 00:25:46.746 [2024-07-24 20:01:38.096061] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.746 [2024-07-24 20:01:38.120253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2190879 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2190879 /var/tmp/bperf.sock 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:46.746 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2190879 ']' 00:25:46.747 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.747 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.747 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.747 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.747 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 [2024-07-24 20:01:38.171194] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:25:46.747 [2024-07-24 20:01:38.171236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190879 ] 00:25:46.747 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.747 [2024-07-24 20:01:38.223943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.747 [2024-07-24 20:01:38.302528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.685 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.685 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:47.685 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.685 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.685 20:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:47.685 20:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.685 20:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.944 nvme0n1 00:25:47.944 20:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:47.945 20:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.203 Running I/O for 2 seconds... 
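The clean-digest pass just traced follows a fixed RPC sequence against the bdevperf app on /var/tmp/bperf.sock: start bdevperf suspended (--wait-for-rpc), initialize the framework, attach the NVMe-oF controller with TCP data digest enabled (--ddgst, so every data PDU carries a crc32c the initiator must verify), then drive the 2-second workload. Condensed from the trace ($SPDK is shorthand for the checkout path; the harness also waits for the bperf socket via waitforlisten before issuing RPCs):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start bdevperf paused so the digest option can be applied before any I/O
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# attach with data digest on; header digest (--hdgst) is left off in this suite
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the configured workload for its 2-second window
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests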
00:25:50.168 00:25:50.168 Latency(us) 00:25:50.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:50.168 nvme0n1 : 2.00 26358.79 102.96 0.00 0.00 4849.35 2265.27 22681.15 00:25:50.168 =================================================================================================================== 00:25:50.168 Total : 26358.79 102.96 0.00 0.00 4849.35 2265.27 22681.15 00:25:50.168 0 00:25:50.168 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:50.168 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:50.168 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:50.168 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:50.168 | select(.opcode=="crc32c") 00:25:50.168 | "\(.module_name) \(.executed)"' 00:25:50.168 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2190879 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2190879 ']' 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2190879 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2190879 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2190879' 00:25:50.428 killing process with pid 2190879 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2190879 00:25:50.428 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.428 00:25:50.428 Latency(us) 00:25:50.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.428 =================================================================================================================== 00:25:50.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.428 20:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2190879 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2191475 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2191475 /var/tmp/bperf.sock 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2191475 ']' 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.428 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.687 [2024-07-24 20:01:42.052097] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:25:50.687 [2024-07-24 20:01:42.052151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191475 ] 00:25:50.687 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.687 Zero copy mechanism will not be used. 
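The throughput column in the first result table above (randread, 4 KiB, qd=128) is just IOPS times the 4096-byte I/O size, and the IOPS figure is itself consistent with the measured average latency at queue depth 128 (Little's law). Two quick checks:

awk 'BEGIN { printf "%.2f MiB/s\n", 26358.79 * 4096 / (1024 * 1024) }'   # 102.96 MiB/s, matching the table
awk 'BEGIN { printf "%.0f IOPS\n",  128 / (4849.35 / 1e6) }'             # ~26395, close to the measured 26358.79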
00:25:50.687 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.687 [2024-07-24 20:01:42.106266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.687 [2024-07-24 20:01:42.175051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.255 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.255 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:51.255 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:51.255 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:51.255 20:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:51.514 20:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.514 20:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.080 nvme0n1 00:25:52.080 20:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.080 20:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.080 Zero copy mechanism will not be used. 00:25:52.080 Running I/O for 2 seconds... 
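Between runs, run_bperf verifies not just the numbers but which accel module actually computed the crc32c digests: it pulls accel_get_stats from the bperf socket, filters it with the jq program shown in the trace, and asserts both that at least one crc32c operation executed and that the module matches expectations (software here, since every run passes scan_dsa=false). A condensed form of that check:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))            # digests were actually computed...
[[ $acc_module == software ]]     # ...and by the software path, not DSA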
00:25:53.994 00:25:53.994 Latency(us) 00:25:53.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:53.994 nvme0n1 : 2.01 2364.34 295.54 0.00 0.00 6764.49 6183.18 22567.18 00:25:53.994 =================================================================================================================== 00:25:53.994 Total : 2364.34 295.54 0.00 0.00 6764.49 6183.18 22567.18 00:25:53.994 0 00:25:53.994 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:53.994 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:53.994 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:53.994 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:53.994 | select(.opcode=="crc32c") 00:25:53.994 | "\(.module_name) \(.executed)"' 00:25:53.994 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2191475 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2191475 ']' 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2191475 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2191475 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2191475' 00:25:54.255 killing process with pid 2191475 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2191475 00:25:54.255 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.255 00:25:54.255 Latency(us) 00:25:54.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.255 =================================================================================================================== 00:25:54.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.255 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2191475 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2192168 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2192168 /var/tmp/bperf.sock 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2192168 ']' 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.516 20:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:54.516 [2024-07-24 20:01:45.996601] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:25:54.516 [2024-07-24 20:01:45.996650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192168 ] 00:25:54.516 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.516 [2024-07-24 20:01:46.050088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.775 [2024-07-24 20:01:46.118809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.345 20:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.345 20:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:55.345 20:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:55.345 20:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.345 20:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:55.604 20:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.604 20:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.864 nvme0n1 00:25:55.864 20:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:55.864 20:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.123 Running I/O for 2 seconds... 
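Every RPC in these traces funnels through two one-line wrappers whose expansions are visible at digest.sh@18 and digest.sh@19: bperf_rpc prefixes rpc.py with the bperf socket, and bperf_py does the same for bdevperf.py. Roughly, as reconstructed from those expansions:

bperf_rpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock "$@"
}
bperf_py() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock "$@"
}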
00:25:58.029 00:25:58.029 Latency(us) 00:25:58.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:58.029 nvme0n1 : 2.00 26575.35 103.81 0.00 0.00 4808.27 2436.23 29861.62 00:25:58.029 =================================================================================================================== 00:25:58.029 Total : 26575.35 103.81 0.00 0.00 4808.27 2436.23 29861.62 00:25:58.029 0 00:25:58.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:58.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:58.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:58.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:58.029 | select(.opcode=="crc32c") 00:25:58.029 | "\(.module_name) \(.executed)"' 00:25:58.029 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2192168 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2192168 ']' 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2192168 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2192168 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2192168' 00:25:58.288 killing process with pid 2192168 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2192168 00:25:58.288 Received shutdown signal, test time was about 2.000000 seconds 00:25:58.288 00:25:58.288 Latency(us) 00:25:58.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.288 =================================================================================================================== 00:25:58.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.288 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2192168 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2192864 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2192864 /var/tmp/bperf.sock 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2192864 ']' 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.548 20:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:58.548 [2024-07-24 20:01:49.967762] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:25:58.548 [2024-07-24 20:01:49.967813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192864 ] 00:25:58.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:58.548 Zero copy mechanism will not be used. 
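The run starting here is the fourth and last configuration of the clean pass: run_digest (digest.sh@128-131) exercises the same flow over a small matrix of block size and queue depth for both directions: 4 KiB at qd=128 to stress digest throughput, and 128 KiB at qd=16 to stress large transfers above the 64 KiB zero-copy threshold (hence the repeated "Zero copy mechanism will not be used" notices). Purely as an illustration of that matrix (the script spells out the four calls rather than looping):

for rw in randread randwrite; do
    run_bperf "$rw" 4096   128 false   # small blocks, deep queue
    run_bperf "$rw" 131072 16  false   # 128 KiB blocks, shallow queue; false = no DSA offload
done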
00:25:58.548 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.548 [2024-07-24 20:01:50.023014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.548 [2024-07-24 20:01:50.114852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.485 20:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.744 nvme0n1 00:25:59.744 20:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:59.744 20:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:59.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:59.744 Zero copy mechanism will not be used. 00:25:59.744 Running I/O for 2 seconds... 
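Teardown of each bperf instance goes through the killprocess helper whose probes appear throughout the xtrace: kill -0 confirms the PID is still alive, ps --no-headers -o comm= re-reads the command name (guarding against PID reuse and against ever killing the sudo wrapper), and only then is the process killed and reaped. Roughly, as reconstructed from the trace (the real helper in autotest_common.sh also handles non-Linux hosts and error paths):

killprocess() {
    local pid=$1
    kill -0 "$pid"                                   # fail fast if already gone
    local process_name
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name != sudo ]]                      # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap; bdevperf -z exits once signaled
}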
00:26:02.279 00:26:02.279 Latency(us) 00:26:02.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:02.279 nvme0n1 : 2.01 1521.79 190.22 0.00 0.00 10486.72 7465.41 33736.79 00:26:02.279 =================================================================================================================== 00:26:02.279 Total : 1521.79 190.22 0.00 0.00 10486.72 7465.41 33736.79 00:26:02.279 0 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.279 | select(.opcode=="crc32c") 00:26:02.279 | "\(.module_name) \(.executed)"' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2192864 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2192864 ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2192864 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2192864 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2192864' 00:26:02.279 killing process with pid 2192864 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2192864 00:26:02.279 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.279 00:26:02.279 Latency(us) 00:26:02.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.279 =================================================================================================================== 00:26:02.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2192864 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2190740 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2190740 ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2190740 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2190740 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.279 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2190740' 00:26:02.279 killing process with pid 2190740 00:26:02.280 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2190740 00:26:02.280 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2190740 00:26:02.538 00:26:02.538 real 0m16.835s 00:26:02.538 user 0m33.286s 00:26:02.538 sys 0m3.454s 00:26:02.538 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.538 20:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.538 ************************************ 00:26:02.538 END TEST nvmf_digest_clean 00:26:02.538 ************************************ 00:26:02.538 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:02.538 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:02.538 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.538 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.538 ************************************ 00:26:02.538 START TEST nvmf_digest_error 00:26:02.538 ************************************ 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2193586 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2193586 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2193586 ']' 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.539 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.539 [2024-07-24 20:01:54.082143] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:26:02.539 [2024-07-24 20:01:54.082186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.539 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.539 [2024-07-24 20:01:54.134220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.797 [2024-07-24 20:01:54.212389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.797 [2024-07-24 20:01:54.212426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.797 [2024-07-24 20:01:54.212433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.797 [2024-07-24 20:01:54.212440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.797 [2024-07-24 20:01:54.212445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
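For the error pass, the target (pid 2193586) is again launched with --wait-for-rpc, but this time the pause exists so that the accel_assign_opc call just below (digest.sh@104) can reassign the crc32c opcode to the accel "error" module before the framework initializes; from then on, every digest the target computes flows through an injectable path. In outline (the framework_start_init call is issued inside the nvmfappstart helper rather than shown in this excerpt, so treat that line as an assumption):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error   # route all crc32c through the error module
$SPDK/scripts/rpc.py framework_start_init                  # assumed: done by nvmfappstart before the listener is created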
00:26:02.797 [2024-07-24 20:01:54.212465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.365 [2024-07-24 20:01:54.938554] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.365 20:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.624 null0 00:26:03.624 [2024-07-24 20:01:55.030982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.624 [2024-07-24 20:01:55.055155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2193626 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2193626 /var/tmp/bperf.sock 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2193626 ']' 
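The initiator side of the error test, traced in the next lines, differs from the clean pass in three ways: bdevperf is started without --wait-for-rpc (digest.sh@57 above), bdev_nvme_set_options turns on per-error accounting and unlimited retries (--nvme-error-stat --bdev-retry-count -1) so injected failures are retried instead of failing the job, and crc32c corruption is kept disabled while the controller attaches and only armed afterwards with -i 256 (an injection count or interval; the trace does not say which). Condensed from the RPCs that follow:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # keep digests clean during attach
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # now corrupt crc32c results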
00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:03.624 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.624 [2024-07-24 20:01:55.103009] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:26:03.624 [2024-07-24 20:01:55.103054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193626 ] 00:26:03.624 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.624 [2024-07-24 20:01:55.156530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.881 [2024-07-24 20:01:55.236549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:04.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:04.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.449 20:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.709 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.967 nvme0n1 00:26:04.967 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:04.967 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.967 20:01:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.967 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.967 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:04.967 20:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.226 Running I/O for 2 seconds... 00:26:05.227 [2024-07-24 20:01:56.631608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.631641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.631651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.642858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.642882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.642892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.652420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.652442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.652451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.661382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.661403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.661411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.671449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.671471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.671480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.680102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.680123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.680132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.690466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.690487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.690500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.699171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.699192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.699201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.708778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.708799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.708808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.718117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.718138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.726974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.727007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.727015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.736886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.736915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.746759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.746780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
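Each injected corruption surfaces as a pair in the trace: nvme_tcp.c:1459 logs the receive-path crc32c mismatch ("data digest error" on qpair 0x18d24f0), and the corresponding READ completes with status (00/22), i.e. status code type 0x0 (generic) / status code 0x22 COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 meaning the command may be retried, which the bdev layer does indefinitely thanks to --bdev-retry-count -1. A one-liner for tallying the injected failures from a captured log (build.log is a hypothetical capture file):

grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log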
00:26:05.227 [2024-07-24 20:01:56.746788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.755606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.755627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.755635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.764803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.764823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.764831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.775026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.775056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.783329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.783350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.783359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.793677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.793698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.793707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.802693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.802714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.802722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.813009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.813029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.813037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.227 [2024-07-24 20:01:56.821330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.227 [2024-07-24 20:01:56.821351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.227 [2024-07-24 20:01:56.821360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.832001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.832023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.832031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.841014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.841035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.841050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.850716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.850739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.850747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.862069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.862091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.862100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.872260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.872282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.872290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.881495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.881518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.881527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.889832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.889854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.901674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.901695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.901704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.910134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.910156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.910164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.920102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.920130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.928651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.928671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.928680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.939074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.939094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.939106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.947157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 
00:26:05.488 [2024-07-24 20:01:56.947178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.947185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.957974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.957997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.958005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.488 [2024-07-24 20:01:56.966613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.488 [2024-07-24 20:01:56.966633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.488 [2024-07-24 20:01:56.966642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:56.976960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:56.976981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:56.976989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:56.986226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:56.986246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:56.986255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:56.995212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:56.995232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:56.995240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.004848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.004868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.004877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.013892] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.013913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.013922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.023514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.023538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.023547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.032660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.032681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.032690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.042624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.042645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.042653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.052438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.052459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.052469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.061455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.061475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.061484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.070331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.070352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.070360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:05.489 [2024-07-24 20:01:57.080616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.489 [2024-07-24 20:01:57.080637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.489 [2024-07-24 20:01:57.080647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.749 [2024-07-24 20:01:57.090348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.749 [2024-07-24 20:01:57.090370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.749 [2024-07-24 20:01:57.090379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.749 [2024-07-24 20:01:57.099102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.749 [2024-07-24 20:01:57.099122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.749 [2024-07-24 20:01:57.099134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.749 [2024-07-24 20:01:57.109157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.749 [2024-07-24 20:01:57.109178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.749 [2024-07-24 20:01:57.109186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.117470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.117490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.117498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.127426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.127446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.127455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.136233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.136253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.136262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.146732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.146753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.146762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.155507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.155537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.166291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.166311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.166319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.174262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.174282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.174291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.184294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.184317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.184326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.193329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.193349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.193358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.202412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.202432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.202440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.211841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.211862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.211871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.221826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.221854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.231226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.231246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.231254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.240325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.240344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.240352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.249810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.249830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.249838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.259550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.259570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.259579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.268141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.268162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.750 [2024-07-24 20:01:57.268171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.278274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.278295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.278304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.286947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.286968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.286977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.296625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.296645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.296654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.305900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.305920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.305928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.316168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.316188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.316196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.324516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.324536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.324544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.334197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.334217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.334225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.750 [2024-07-24 20:01:57.344981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:05.750 [2024-07-24 20:01:57.345002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.750 [2024-07-24 20:01:57.345015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.009 [2024-07-24 20:01:57.353339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.009 [2024-07-24 20:01:57.353360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.009 [2024-07-24 20:01:57.353369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.009 [2024-07-24 20:01:57.363348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.009 [2024-07-24 20:01:57.363368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.009 [2024-07-24 20:01:57.363376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.009 [2024-07-24 20:01:57.371930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.009 [2024-07-24 20:01:57.371951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.009 [2024-07-24 20:01:57.371959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.009 [2024-07-24 20:01:57.381837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.381859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.381868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.391086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.391107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.391117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.401032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.401058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.401067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.409685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.409706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.409714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.419415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.419436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.419445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.429078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.429101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.429110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.438833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.438854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.438863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.448121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.448142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.448149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.457915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.457934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.457942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.466165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 
00:26:06.010 [2024-07-24 20:01:57.466185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.466193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.476261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.476281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.476289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.485500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.485521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.485530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.494878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.494898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.494908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.504140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.504170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.513355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.513377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.513386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.523574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.523595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.523604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.531533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.531553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.531562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.542325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.542345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.550422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.550442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.550452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.561085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.561106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.561114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.570277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.570298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.570306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.581947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.581969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.581977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.592461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.592485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.592493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.010 [2024-07-24 20:01:57.601577] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.010 [2024-07-24 20:01:57.601598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.010 [2024-07-24 20:01:57.601606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.611312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.611335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.611345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.620630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.620650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.620659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.630086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.630107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.630115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.639202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.639223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.639231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.648883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.648903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.648912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.657912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.657932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.657940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.668457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.668478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.668487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.676879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.676899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.676907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.686208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.686228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.686237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.695652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.695672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.695680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.705608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.705629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.705637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.714124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.714145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.714153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.724208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.724229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.724237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.733507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.270 [2024-07-24 20:01:57.733528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.270 [2024-07-24 20:01:57.733537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.270 [2024-07-24 20:01:57.742564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.742585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.742593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.271 [2024-07-24 20:01:57.751621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.751642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.751654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.271 [2024-07-24 20:01:57.761942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.761962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.761971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.271 [2024-07-24 20:01:57.770458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.770479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.770488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.271 [2024-07-24 20:01:57.780892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.780912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.780922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.271 [2024-07-24 20:01:57.790211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0) 00:26:06.271 [2024-07-24 20:01:57.790232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.271 [2024-07-24 20:01:57.790240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.271 [2024-07-24 20:01:57.799481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0)
00:26:06.271 [2024-07-24 20:01:57.799502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.271 [2024-07-24 20:01:57.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further records of the same three-line pattern elided: a data digest error on tqpair=(0x18d24f0), the failed READ (len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the timestamp, cid, and lba changing from record to record ...]
00:26:07.055 [2024-07-24 20:01:58.611283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d24f0)
00:26:07.055 [2024-07-24 20:01:58.611303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:07.055 [2024-07-24 20:01:58.611311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:07.315
00:26:07.315                                           Latency(us)
00:26:07.315 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:07.315 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:07.315 nvme0n1            :       2.04   25968.79     101.44       0.00     0.00    4826.98    2379.24   47413.87
00:26:07.315 ===================================================================================================================
00:26:07.315 Total              :           25968.79     101.44       0.00     0.00    4826.98    2379.24   47413.87
00:26:07.315 0
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:07.315 | .driver_specific
00:26:07.315 | .nvme_error
00:26:07.315 | .status_code
00:26:07.315 | .command_transient_transport_error'
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
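The (( 208 > 0 )) check above is the pass/fail gate for this case: the 208 comes out of the jq pipeline over bdev_get_iostat, i.e. the number of COMMAND TRANSIENT TRANSPORT ERROR completions the host recorded for nvme0n1. A minimal standalone sketch of the same query, assuming only that SPDK_DIR points at an SPDK checkout and that a bdevperf instance is still serving RPCs on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Sketch: reproduce the get_transient_errcount check traced above.
    # Assumptions: SPDK_DIR points at an SPDK tree; bdevperf serves RPCs on
    # /var/tmp/bperf.sock; bdev_nvme_set_options --nvme-error-stat was applied
    # earlier, otherwise the nvme_error counters below stay empty.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_get_iostat returns JSON; the jq filter is the one from the trace.
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # Fail unless at least one transient transport error was recorded.
    (( errcount > 0 ))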
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2193626
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2193626 ']'
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2193626
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2193626
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2193626'
killing process with pid 2193626
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2193626
Received shutdown signal, test time was about 2.000000 seconds
00:26:07.315
00:26:07.315                                           Latency(us)
00:26:07.315 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:07.315 ===================================================================================================================
00:26:07.315 Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2193626
20:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2194321
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2194321 /var/tmp/bperf.sock
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2194321 ']'
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
[2024-07-24 20:01:59.119128] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
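run_bperf_err then repeats the experiment with a fresh bdevperf process (randread, 131072-byte I/O, queue depth 16), and waitforlisten blocks until the new process answers on the RPC socket. A simplified launch-and-wait sketch follows; the rpc_get_methods probe and the 0.5 s poll interval are assumptions here (the trace only shows max_retries=100):

    #!/usr/bin/env bash
    # Sketch: start bdevperf in the background and wait for its RPC socket,
    # a simplified stand-in for the waitforlisten helper traced above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    sock=/var/tmp/bperf.sock

    # -z makes bdevperf idle until a perform_tests RPC arrives.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
        # Give up immediately if bdevperf already exited.
        kill -0 "$bperfpid" 2>/dev/null || exit 1
        # The socket is ready once any RPC succeeds.
        if "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done
    (( i < 100 ))   # non-zero exit if the socket never came up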
00:26:07.573 [2024-07-24 20:01:59.119179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194321 ]
00:26:07.573 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:07.573 Zero copy mechanism will not be used.
00:26:07.573 EAL: No free 2048 kB hugepages reported on node 1
00:26:07.832 [2024-07-24 20:01:59.173250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:07.832 [2024-07-24 20:01:59.242294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:08.444 20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:08.444 20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:08.444 20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:08.444 20:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:08.704 20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:08.963 nvme0n1
00:26:08.964 20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
20:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:08.964 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:08.964 Zero copy mechanism will not be used.
00:26:08.964 Running I/O for 2 seconds...
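With bdevperf up, the RPCs traced above are the whole recipe for the digest-error run. Consolidated as a sketch, with one assumption called out: rpc_cmd, issued without -s, is taken to reach rpc.py's default socket and thus the nvmf target application (the trace does not show that socket explicitly):

    #!/usr/bin/env bash
    # Sketch: the digest-error setup sequence driven above, in one place.
    # Assumption: plain rpc_cmd targets rpc.py's default socket
    # (/var/tmp/spdk.sock), i.e. the nvmf target application.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    rpc="$SPDK_DIR/scripts/rpc.py"

    # Keep per-bdev NVMe error counters and never retry failed I/O, so every
    # digest error stays visible later in bdev_get_iostat.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start clean: no crc32c error injection while the controller attaches.
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach with TCP data digest (--ddgst) enabled; this creates nvme0n1.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt crc32c results (arguments as in the trace) so computed data
    # digests stop matching and reads start failing with digest errors.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the 2-second bdevperf run; each corrupted digest surfaces as a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, as logged below.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests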
00:26:08.964 [2024-07-24 20:02:00.464760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030)
00:26:08.964 [2024-07-24 20:02:00.464793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.964 [2024-07-24 20:02:00.464803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further records of the same three-line pattern elided: every injected error hits the new qpair (tqpair=(0x1d70030)) as a failed READ (cid:15, len:32) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, lba, and sqhd change from record to record ...]
00:26:09.485 [2024-07-24 20:02:01.044641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030)
00:26:09.485 [2024-07-24 20:02:01.044661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.485 [2024-07-24 20:02:01.044669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.485 [2024-07-24 20:02:01.058050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030)
00:26:09.485 [2024-07-24 20:02:01.058071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.485 [2024-07-24 20:02:01.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.485 [2024-07-24 20:02:01.071223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.485 [2024-07-24 20:02:01.071244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.485 [2024-07-24 20:02:01.071251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.745 [2024-07-24 20:02:01.084616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.745 [2024-07-24 20:02:01.084637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-07-24 20:02:01.084646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.745 [2024-07-24 20:02:01.098211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.745 [2024-07-24 20:02:01.098233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-07-24 20:02:01.098241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.745 [2024-07-24 20:02:01.111644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.745 [2024-07-24 20:02:01.111665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-07-24 20:02:01.111673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.745 [2024-07-24 20:02:01.125165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.745 [2024-07-24 20:02:01.125185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-07-24 20:02:01.125193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.745 [2024-07-24 20:02:01.138144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.745 [2024-07-24 20:02:01.138165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-07-24 20:02:01.138173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.151410] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.151434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.151442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.164668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.164689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.164697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.177813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.177833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.177842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.190800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.190820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.203801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.203822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.203830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.216903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.216924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.216932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.229982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.230002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.230010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.243126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.243146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.243154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.256140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.256166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.256178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.269180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.269201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.269209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.282187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.282208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.282216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.295196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.295216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.295224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.308166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.308187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.308194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.321025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.321050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.321059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.746 [2024-07-24 20:02:01.333977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:09.746 [2024-07-24 20:02:01.333997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-07-24 20:02:01.334004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.005 [2024-07-24 20:02:01.347015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.005 [2024-07-24 20:02:01.347036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-07-24 20:02:01.347051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-07-24 20:02:01.360104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.005 [2024-07-24 20:02:01.360124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-07-24 20:02:01.360140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.005 [2024-07-24 20:02:01.373351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.005 [2024-07-24 20:02:01.373375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-07-24 20:02:01.373383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.005 [2024-07-24 20:02:01.386329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.005 [2024-07-24 20:02:01.386349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-07-24 20:02:01.386357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.399422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.399442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.399450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.412312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.412333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.412340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.425591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.425611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.425619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.438521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.438541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.438549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.451660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.451679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.451687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.464485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.464505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.464514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.477354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.477376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.477384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.490289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.490309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.490317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.503138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.503158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:10.006 [2024-07-24 20:02:01.503166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.516091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.516112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.516120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.529212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.529233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.529241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.542188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.542208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.542216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.555236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.555256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.555263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.568212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.568232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.568241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.581204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.581225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.581233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.006 [2024-07-24 20:02:01.594184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.006 [2024-07-24 20:02:01.594205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.006 [2024-07-24 20:02:01.594216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.607449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.607470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.607478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.620352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.620373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.620380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.633357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.633377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.633385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.646228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.646248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.646256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.659270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.659290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.659298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.672353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.672376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.672385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.685450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.685471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.685479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.698434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.698455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.266 [2024-07-24 20:02:01.698463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.266 [2024-07-24 20:02:01.711430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.266 [2024-07-24 20:02:01.711450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.711458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.724449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.724469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.724477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.737416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.737444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.750582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.750603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.750611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.763589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.763609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.763617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.776495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 
00:26:10.267 [2024-07-24 20:02:01.776516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.776525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.789409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.789430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.802353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.802373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.802381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.815129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.815149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.815160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.828120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.828141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.828149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.841080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.841100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.841108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.267 [2024-07-24 20:02:01.853943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.267 [2024-07-24 20:02:01.853963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.267 [2024-07-24 20:02:01.853971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.866959] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.866980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.866989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.879935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.879956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.879964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.892680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.892700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.892708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.905619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.905641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.905649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.918523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.918544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.918552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.931473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.931497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.931505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.944529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.944549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.944557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.957411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.957431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.957439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.970224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.970244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.970252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.983127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.983147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.983155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:01.996026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:01.996052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:01.996060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.008878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.008898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.008905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.021977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.021997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.022004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.034878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.034898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.034906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.047887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.047907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.047916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.060838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.060858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.060865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.073916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.073936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.073944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.087106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.087125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.087133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.100002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.100023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.100032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.526 [2024-07-24 20:02:02.112935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.526 [2024-07-24 20:02:02.112956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.526 [2024-07-24 20:02:02.112964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.125919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.125940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.125949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.139002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.139022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.139031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.151854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.151875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.151886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.164854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.164874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.164883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.177913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.177933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.177941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.190939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.190960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.786 [2024-07-24 20:02:02.190969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.786 [2024-07-24 20:02:02.203953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.786 [2024-07-24 20:02:02.203974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.787 [2024-07-24 20:02:02.203983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.787 [2024-07-24 20:02:02.216988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030) 00:26:10.787 [2024-07-24 20:02:02.217010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:26:10.787 [2024-07-24 20:02:02.217018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.787 [2024-07-24 20:02:02.229980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030)
00:26:10.787 [2024-07-24 20:02:02.230002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.787 [2024-07-24 20:02:02.230011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 15 further data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplets elided: the same pattern repeats about every 13 ms (qid:1 cid:15 throughout, only lba and sqhd vary) from 20:02:02.243002 through 20:02:02.426405 ...]
00:26:11.047 [2024-07-24 20:02:02.439342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d70030)
00:26:11.047 [2024-07-24 20:02:02.439362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.047 [2024-07-24 20:02:02.439370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:11.047
00:26:11.047 Latency(us)
00:26:11.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.047 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:11.047 nvme0n1 : 2.00 2290.78 286.35 0.00 0.00 6979.61 6325.65 23592.96
00:26:11.047 ===================================================================================================================
00:26:11.047 Total : 2290.78 286.35 0.00 0.00 6979.61 6325.65 23592.96
00:26:11.047 0
00:26:11.047 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:11.047 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:11.047 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:11.047 | .driver_specific
00:26:11.047 | .nvme_error
00:26:11.047 | .status_code
00:26:11.047 | .command_transient_transport_error'
00:26:11.047 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 ))
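The check above (host/digest.sh@71) boils down to a single RPC call piped through jq: because the controller is created with --nvme-error-stat, bdev_nvme keeps per-status-code completion counters that bdev_get_iostat exposes under driver_specific.nvme_error. A minimal standalone sketch of the same query, assuming a bdevperf instance is still serving RPC on /var/tmp/bperf.sock and the bdev is named nvme0n1:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # the test only passes if injected digest errors were actually counted (148 in this run)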
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2194321
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2194321 ']'
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2194321
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2194321
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2194321'
00:26:11.307 killing process with pid 2194321
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2194321
00:26:11.307 Received shutdown signal, test time was about 2.000000 seconds
00:26:11.307
00:26:11.307 Latency(us)
00:26:11.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.307 ===================================================================================================================
00:26:11.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2194321
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2195012
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2195012 /var/tmp/bperf.sock
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2195012 ']'
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:11.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
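The run_bperf_err helper traced above uses the standard SPDK bperf pattern: launch bdevperf with -z so it comes up idle and only serves RPC on /var/tmp/bperf.sock, then wait for that socket before configuring the job. A simplified sketch of the launch step; the real waitforlisten in autotest_common.sh also bounds the wait with max_retries=100 and uses the rpc_addr shown above, so the bare polling loop here is an illustrative stand-in:

    # Start bdevperf suspended (-z): 4 KiB random writes, queue depth 128, 2 s runtime.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Wait for the RPC socket to appear, bailing out if bdevperf died during startup.
    while [ ! -S /var/tmp/bperf.sock ]; do
        kill -0 "$bperfpid" || exit 1
        sleep 0.1
    done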
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:11.307 20:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:11.567 [2024-07-24 20:02:02.916881] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:26:11.567 [2024-07-24 20:02:02.916926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195012 ]
00:26:11.567 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.567 [2024-07-24 20:02:02.970631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.567 [2024-07-24 20:02:03.038877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:12.142 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:12.142 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:12.142 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:12.142 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:12.403 20:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:12.663 nvme0n1
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:12.663 20:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:12.663 Running I/O for 2 seconds...
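Spelled out, digest.sh@61-69 configures the error-injection run in four RPC steps before starting I/O (bperf_rpc and bperf_py are the script's wrappers around rpc.py and bdevperf.py pointed at /var/tmp/bperf.sock, and rpc_cmd is the autotest framework's generic rpc.py wrapper). A condensed sketch of the sequence as traced above:

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    # so injected digest errors are retried and counted rather than failing the job.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c injection left over from the previous (randread) run.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the controller with the TCP data digest enabled (--ddgst).
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 256 crc32c operations so computed data digests stop matching.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the queued randwrite job; each corrupted digest surfaces below as a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that bdev_nvme retries.
    bperf_py perform_tests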
00:26:12.924 [2024-07-24 20:02:04.277681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190fda78
00:26:12.924 [2024-07-24 20:02:04.278297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.924 [2024-07-24 20:02:04.278326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:12.924 [2024-07-24 20:02:04.287350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190f6458
00:26:12.924 [2024-07-24 20:02:04.287540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.924 [2024-07-24 20:02:04.287563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the remaining Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets of the 2-second run are elided: roughly one every 10 ms from 20:02:04.297064 through 20:02:05.362575, cycling through a rotating set of pdu offsets (0x2000190xxxxx) with cid, lba and sqhd varying ...]
00:26:13.980 [2024-07-24 20:02:05.371979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00
00:26:13.980 [2024-07-24 20:02:05.372234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:13.980 [2024-07-24 20:02:05.372254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:13.980 [2024-07-24 20:02:05.381986]
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.382247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.382267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.391926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.392181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.392200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.401811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.402063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.402082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.411511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.411758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.411777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.421287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.421539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.421559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.430970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.431326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 [2024-07-24 20:02:05.440646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.980 [2024-07-24 20:02:05.440885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.980 [2024-07-24 20:02:05.440904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.980 
[2024-07-24 20:02:05.450325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.450566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.450585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.460013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.460264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.460282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.469674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.469922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.469940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.479318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.479564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.479582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.489013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.489272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.489290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.498685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.498932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.498951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.508462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.508708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.508727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:13.981 [2024-07-24 20:02:05.518141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.518384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.518403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.527896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.528154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.537580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.537829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.537847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.547506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.547743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.547762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.557375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.557620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.557637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.981 [2024-07-24 20:02:05.567151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:13.981 [2024-07-24 20:02:05.567400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.981 [2024-07-24 20:02:05.567417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.577037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.577299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.577318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.586839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.587087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.587106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.596498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.596740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.596759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.606144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.606396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.606414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.615846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.616105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.616124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.625606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.625855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.625874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.635301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.635551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.635569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.644974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.645248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.654664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.654910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.654928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.664321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.664569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.664587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.673995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.674247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.674266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.683651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.683894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.683915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.693362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.693608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.693626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.702994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.703248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.703266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.242 [2024-07-24 20:02:05.712694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.242 [2024-07-24 20:02:05.712941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.242 [2024-07-24 20:02:05.712959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.722358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.722605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.722624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.732036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.732290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.732308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.741726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.741992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.751474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.751716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.751733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.761146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.761390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.761409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.770859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.771112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.771131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.780541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.780792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.780811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.790222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.790472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.790491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.800199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.800457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.800475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.810054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.810301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.810320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.819920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.820182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.820201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.243 [2024-07-24 20:02:05.829696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.243 [2024-07-24 20:02:05.829943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.243 [2024-07-24 20:02:05.829962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.504 [2024-07-24 20:02:05.839606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.504 [2024-07-24 20:02:05.839858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.504 [2024-07-24 20:02:05.839877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.504 [2024-07-24 20:02:05.849474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.504 [2024-07-24 20:02:05.849720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.504 [2024-07-24 20:02:05.849738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.504 [2024-07-24 20:02:05.859136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.504 [2024-07-24 20:02:05.859391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.504 [2024-07-24 20:02:05.859410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.868807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.869055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.869073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.878645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.878887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.878906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.888330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.888587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.888605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.898040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.898301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.907701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.907949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.907967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.917425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.917675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 
20:02:05.917694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.927284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.927533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.927551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.936942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.937197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.937221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.946622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.946863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.946882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.956301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.956548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.956566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.965955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.966211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.966230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.975618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.975862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.975881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.985289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.985534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:14.505 [2024-07-24 20:02:05.985552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:05.994960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:05.995214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:05.995232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.004640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.004886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.004904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.014294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.014540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.014558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.023971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.024225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.024244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.033656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.033898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.033916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.043307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.043559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.043593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.053292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.053540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9364 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.053558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.063099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.063352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.063371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.072910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.073162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.073181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.082562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.082810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.082828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.505 [2024-07-24 20:02:06.092254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.505 [2024-07-24 20:02:06.092501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.505 [2024-07-24 20:02:06.092519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.102085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.102333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.102352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.111867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.112111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.112130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.121614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.121862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11312 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.121881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.131322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.131570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.131588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.140971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.141219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.150702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.150947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.150965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.160379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.160623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.160642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.170028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.170287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.170304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.179692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.764 [2024-07-24 20:02:06.179941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.764 [2024-07-24 20:02:06.179959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.764 [2024-07-24 20:02:06.189390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.189639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:84 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.189659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 [2024-07-24 20:02:06.199040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.199297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.199315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 [2024-07-24 20:02:06.208714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.208962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.208981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 [2024-07-24 20:02:06.218382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.218632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.218649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 [2024-07-24 20:02:06.228040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.228296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.228314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 [2024-07-24 20:02:06.237719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c00a0) with pdu=0x2000190eea00 00:26:14.765 [2024-07-24 20:02:06.237967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.765 [2024-07-24 20:02:06.237986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.765 00:26:14.765 Latency(us) 00:26:14.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.765 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:14.765 nvme0n1 : 2.00 25564.00 99.86 0.00 0.00 4998.62 2949.12 32824.99 00:26:14.765 =================================================================================================================== 00:26:14.765 Total : 25564.00 99.86 0.00 0.00 4998.62 2949.12 32824.99 00:26:14.765 0 00:26:14.765 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:14.765 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
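[Annotation] The summary line above is internally consistent: at this job's 4096-byte I/O size, 25564 IOPS works out to 25564 * 4096 = 104,710,144 bytes/s, i.e. 99.86 MiB/s, exactly the MiB/s column. A one-line sanity check (plain awk, nothing SPDK-specific):

    awk 'BEGIN { printf "%.2f MiB/s\n", 25564 * 4096 / (1024 * 1024) }'    # prints 99.86

Note also that Fail/s stays at 0.00 despite the injected digest errors: the transient-error completions are retried rather than failed (see the retry-count note at the second setup below).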
00:26:14.765 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:14.765 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:14.765 | .driver_specific
00:26:14.765 | .nvme_error
00:26:14.765 | .status_code
00:26:14.765 | .command_transient_transport_error'
00:26:14.765 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2195012
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2195012 ']'
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2195012
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2195012
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2195012'
killing process with pid 2195012
20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2195012
Received shutdown signal, test time was about 2.000000 seconds
00:26:15.025
00:26:15.025 Latency(us)
00:26:15.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:15.025 ===================================================================================================================
00:26:15.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:15.025 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2195012
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2195604
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2195604 /var/tmp/bperf.sock
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
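[Annotation] The pass/fail decision traced above reduces to one RPC plus a jq filter: because bdevperf runs with bdev_nvme_set_options --nvme-error-stat, the NVMe bdev layer keeps per-status-code completion counters, and get_transient_errcount simply reads the command_transient_transport_error bucket out of bdev_get_iostat. A standalone equivalent of the helper as traced (same socket; rpc.py path relative to the spdk checkout):

    # count completions with NVMe status COMMAND TRANSIENT TRANSPORT ERROR (00/22) on nvme0n1
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # prints 200 for this run, which is what makes the (( 200 > 0 )) assertion above pass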
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2195604 ']'
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:15.285 20:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:15.285 [2024-07-24 20:02:06.726803] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:26:15.285 [2024-07-24 20:02:06.726850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195604 ]
00:26:15.285 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:15.285 Zero copy mechanism will not be used.
00:26:15.285 EAL: No free 2048 kB hugepages reported on node 1
00:26:15.285 [2024-07-24 20:02:06.781415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.285 [2024-07-24 20:02:06.861675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:16.225 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:16.485 nvme0n1
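[Annotation] Setup order for this second run matters: crc32c error injection is switched off first so the attach itself completes cleanly, --ddgst asks the initiator to send and verify the NVMe/TCP data digest (a CRC32C over each data PDU), and --bdev-retry-count -1 (which reads as unlimited bdev-layer retries) is what keeps Fail/s at 0.00 in the stats while --nvme-error-stat still counts every transient-error completion. The three RPCs from the trace, condensed (long workspace path to rpc.py shortened):

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable    # clean digests while connecting
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0         # trace shows it returns: nvme0n1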
accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:16.485 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.485 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.485 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.485 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:16.485 20:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.485 Zero copy mechanism will not be used. 00:26:16.485 Running I/O for 2 seconds... 00:26:16.745 [2024-07-24 20:02:08.114698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.115311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.115340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.133877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.134398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.134423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.152292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.152845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.152868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.170621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.171182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.188980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.189563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.189584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.206788] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.207567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.207587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.226763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.227606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.227626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.248562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.249203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.267958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.268509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.268529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.285999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.286599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.286619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.306183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.306820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.306839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.745 [2024-07-24 20:02:08.327240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:16.745 [2024-07-24 20:02:08.327903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.745 [2024-07-24 20:02:08.327927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
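The repeating triplets in this stretch of the log (a data_crc32_calc_done *ERROR*, the WRITE command it belongs to, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) are the intended outcome of the error injection set up just above: the target's accel layer corrupts crc32c results, so the per-PDU data digest enabled by --ddgst fails verification on the host side. A minimal sketch of that sequence, reconstructed from the host/digest.sh trace in this log; the sockets, address, and arguments are taken verbatim from this run, while sending the injection call to the target's default RPC socket is an assumption:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Host side: attach the controller with data digest enabled so CRC mismatches are detected.
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side (assumed to be the default RPC socket): corrupt crc32c results in the
  # accel layer, with the same -o/-t/-i arguments this run used.
  $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued 2-second randwrite workload in the already-running bdevperf.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests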
00:26:17.005 [2024-07-24 20:02:08.345848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.346447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.363683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.364208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.364228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.382195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.382958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.382978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.401877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.402640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.402660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.423144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.423666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.423685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.441933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.442477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.442497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.460325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.460919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.460938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.478342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.479006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.479025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.496376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.496944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.496963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.516987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.517659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.517679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.537334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.538253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.538272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.558340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.558643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.558662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.579099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.579803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.579822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.006 [2024-07-24 20:02:08.601280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.006 [2024-07-24 20:02:08.601872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.006 [2024-07-24 20:02:08.601892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.621854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.267 [2024-07-24 20:02:08.622546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.267 [2024-07-24 20:02:08.622566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.640844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.267 [2024-07-24 20:02:08.641142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.267 [2024-07-24 20:02:08.641163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.659303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.267 [2024-07-24 20:02:08.659600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.267 [2024-07-24 20:02:08.659620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.679358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.267 [2024-07-24 20:02:08.679869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.267 [2024-07-24 20:02:08.679888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.698971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.267 [2024-07-24 20:02:08.699634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.267 [2024-07-24 20:02:08.699655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.267 [2024-07-24 20:02:08.720401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.720860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.720879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.738439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.738930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.738949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.757056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.757653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.757672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.776860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.777262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.777281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.798215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.798826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.798845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.818199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.818712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.818731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.268 [2024-07-24 20:02:08.847857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.268 [2024-07-24 20:02:08.848473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.268 [2024-07-24 20:02:08.848508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.876357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.876947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:08.876967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.898430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.899102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 
[2024-07-24 20:02:08.899122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.928567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.929472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:08.929498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.951218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.952054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:08.952074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.973775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.974668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:08.974688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:08.994850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:08.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:08.995490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:09.015105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:09.015760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:09.015779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:09.034390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:09.035074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:09.035094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.528 [2024-07-24 20:02:09.055459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.528 [2024-07-24 20:02:09.056287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.528 [2024-07-24 20:02:09.056312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.529 [2024-07-24 20:02:09.076997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.529 [2024-07-24 20:02:09.077671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.529 [2024-07-24 20:02:09.077690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.529 [2024-07-24 20:02:09.096854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.529 [2024-07-24 20:02:09.097680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.529 [2024-07-24 20:02:09.097700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.529 [2024-07-24 20:02:09.117416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.529 [2024-07-24 20:02:09.118022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.529 [2024-07-24 20:02:09.118041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.137041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.137644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.137664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.158279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.158923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.158943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.179716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.180355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.180375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.200612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.201309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.201329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.221008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.221792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.243568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.244019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.244038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.265050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.265839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.265858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.285007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.306191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.306562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.306580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.326427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.326893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.326913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.347744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.348464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.348485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.789 [2024-07-24 20:02:09.368859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:17.789 [2024-07-24 20:02:09.369473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-24 20:02:09.369493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.389582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.390146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.390166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.408576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.409221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.409246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.428827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.429711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.429732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.449082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.449636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.449666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.469719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.470395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.470414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.490263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 
[2024-07-24 20:02:09.490675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.490693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.511393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.511891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.511910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.532934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.533399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.533419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.553612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.554398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.554417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.575200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.575598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.049 [2024-07-24 20:02:09.575617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.049 [2024-07-24 20:02:09.594987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.049 [2024-07-24 20:02:09.595594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.050 [2024-07-24 20:02:09.595614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.050 [2024-07-24 20:02:09.613734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.050 [2024-07-24 20:02:09.614435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.050 [2024-07-24 20:02:09.614455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.050 [2024-07-24 20:02:09.633374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.050 [2024-07-24 20:02:09.634062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.050 [2024-07-24 20:02:09.634082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.653260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.653876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.653896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.673377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.674091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.674111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.695389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.696030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.696056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.716988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.717586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.717605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.736996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.737450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.737470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.758050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.758700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.758720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.779057] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.779662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.779682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.799699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.799921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.799939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.819714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.820172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.820193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.839703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.840387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.840406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.859874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.860778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.860797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.880966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.881575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.881595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.310 [2024-07-24 20:02:09.902624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.310 [2024-07-24 20:02:09.903091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.310 [2024-07-24 20:02:09.903111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:18.570 [2024-07-24 20:02:09.923805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:09.924511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:09.924532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.570 [2024-07-24 20:02:09.943179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:09.943769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:09.943792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.570 [2024-07-24 20:02:09.963911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:09.964427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:09.964448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.570 [2024-07-24 20:02:09.984514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:09.985137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:09.985157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.570 [2024-07-24 20:02:10.003374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:10.003995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:10.004015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.570 [2024-07-24 20:02:10.023954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.570 [2024-07-24 20:02:10.024468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.570 [2024-07-24 20:02:10.024492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.571 [2024-07-24 20:02:10.042794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90 00:26:18.571 [2024-07-24 20:02:10.043299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.571 [2024-07-24 20:02:10.043320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.571 [2024-07-24 20:02:10.062742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90
00:26:18.571 [2024-07-24 20:02:10.063419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.571 [2024-07-24 20:02:10.063439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.571 [2024-07-24 20:02:10.082297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23c0240) with pdu=0x2000190fef90
00:26:18.571 [2024-07-24 20:02:10.082569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.571 [2024-07-24 20:02:10.082590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:18.571
00:26:18.571 Latency(us)
00:26:18.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:18.571 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:18.571 nvme0n1 : 2.01 1497.15 187.14 0.00 0.00 10657.96 7465.41 34420.65
00:26:18.571 ===================================================================================================================
00:26:18.571 Total : 1497.15 187.14 0.00 0.00 10657.96 7465.41 34420.65
00:26:18.571 0
00:26:18.571 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:18.571 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:18.571 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:18.571 | .driver_specific
00:26:18.571 | .nvme_error
00:26:18.571 | .status_code
00:26:18.571 | .command_transient_transport_error'
00:26:18.571 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:18.830 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 97 > 0 ))
00:26:18.830 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2195604
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2195604 ']'
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2195604
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2195604
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2195604'
killing process with pid 2195604
20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2195604
Received shutdown signal, test time was about 2.000000 seconds
00:26:18.831
00:26:18.831 Latency(us)
00:26:18.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:18.831 ===================================================================================================================
00:26:18.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:18.831 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2195604
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2193586
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2193586 ']'
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2193586
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2193586
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2193586'
killing process with pid 2193586
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2193586
00:26:19.090 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2193586
00:26:19.350
00:26:19.350 real 0m16.701s
00:26:19.350 user 0m32.970s
00:26:19.350 sys 0m3.562s
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:19.350 ************************************
00:26:19.350 END TEST nvmf_digest_error
00:26:19.350 ************************************
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:19.350 rmmod nvme_tcp
00:26:19.350 rmmod nvme_fabrics
00:26:19.350 rmmod nvme_keyring
00:26:19.350 20:02:10
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2193586 ']' 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2193586 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2193586 ']' 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2193586 00:26:19.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2193586) - No such process 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2193586 is not found' 00:26:19.350 Process with pid 2193586 is not found 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.350 20:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.891 20:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.891 00:26:21.891 real 0m41.179s 00:26:21.891 user 1m7.816s 00:26:21.891 sys 0m11.077s 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.892 ************************************ 00:26:21.892 END TEST nvmf_digest 00:26:21.892 ************************************ 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.892 ************************************ 00:26:21.892 START TEST nvmf_bdevperf 00:26:21.892 ************************************ 00:26:21.892 20:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:21.892 * Looking for test storage... 
00:26:21.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.892 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.893 20:02:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:27.174 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:27.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:27.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.175 20:02:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:27.175 Found net devices under 0000:86:00.0: cvl_0_0 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:27.175 Found net devices under 0000:86:00.1: cvl_0_1 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:27.175 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:27.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:26:27.469 00:26:27.469 --- 10.0.0.2 ping statistics --- 00:26:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.469 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:27.469 00:26:27.469 --- 10.0.0.1 ping statistics --- 00:26:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.469 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2199713 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2199713 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2199713 ']' 
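The nvmf_tcp_init block above splits the two e810 ports so the target side (cvl_0_0, 10.0.0.2, moved into the cvl_0_0_ns_spdk namespace) and the initiator side (cvl_0_1, 10.0.0.1, left in the root namespace) talk over a real link, opens TCP port 4420 in iptables, and smoke-tests both directions with ping. A rough equivalent without the physical NICs, using a veth pair (interface and namespace names here are hypothetical):

  ip netns add nvmf_tgt_ns                        # the log's namespace is cvl_0_0_ns_spdk
  ip link add veth_host type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns          # target-side end goes into the namespace
  ip addr add 10.0.0.1/24 dev veth_host           # initiator address, as in the log
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_host up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  ping -c 1 10.0.0.2                              # same reachability check the harness runs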
00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.469 20:02:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.469 [2024-07-24 20:02:18.919019] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:26:27.469 [2024-07-24 20:02:18.919067] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.469 [2024-07-24 20:02:18.974671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.469 [2024-07-24 20:02:19.055499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.469 [2024-07-24 20:02:19.055537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.469 [2024-07-24 20:02:19.055545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.469 [2024-07-24 20:02:19.055553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.469 [2024-07-24 20:02:19.055558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
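nvmfappstart launches nvmf_tgt inside the namespace with core mask 0xE, i.e. three reactors on cores 1-3 (matching the three "Reactor started" notices that follow), with every tracepoint group enabled via -e 0xFFFF, then waits for the app's RPC socket. A hand-run equivalent, with the rpc.py probe standing in for the harness's waitforlisten loop (an assumption, not the script's exact mechanism):

  # start the target in the namespace, shm id 0, all trace groups, cores 1-3
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # block until the default UNIX-domain RPC socket (/var/tmp/spdk.sock) answers
  ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null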
00:26:27.469 [2024-07-24 20:02:19.055598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.469 [2024-07-24 20:02:19.055685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.469 [2024-07-24 20:02:19.055685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 [2024-07-24 20:02:19.768515] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 Malloc0 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.410 [2024-07-24 20:02:19.832919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.410 { 00:26:28.410 "params": { 00:26:28.410 "name": "Nvme$subsystem", 00:26:28.410 "trtype": "$TEST_TRANSPORT", 00:26:28.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.410 "adrfam": "ipv4", 00:26:28.410 "trsvcid": "$NVMF_PORT", 00:26:28.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.410 "hdgst": ${hdgst:-false}, 00:26:28.410 "ddgst": ${ddgst:-false} 00:26:28.410 }, 00:26:28.410 "method": "bdev_nvme_attach_controller" 00:26:28.410 } 00:26:28.410 EOF 00:26:28.410 )") 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:28.410 20:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:28.410 "params": { 00:26:28.410 "name": "Nvme1", 00:26:28.410 "trtype": "tcp", 00:26:28.410 "traddr": "10.0.0.2", 00:26:28.410 "adrfam": "ipv4", 00:26:28.410 "trsvcid": "4420", 00:26:28.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.410 "hdgst": false, 00:26:28.410 "ddgst": false 00:26:28.410 }, 00:26:28.410 "method": "bdev_nvme_attach_controller" 00:26:28.410 }' 00:26:28.410 [2024-07-24 20:02:19.883174] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:26:28.410 [2024-07-24 20:02:19.883216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199962 ] 00:26:28.410 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.410 [2024-07-24 20:02:19.937426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.670 [2024-07-24 20:02:20.012379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.930 Running I/O for 1 seconds... 
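Before this first 1-second run, the rpc_cmd sequence above assembled the whole target: a TCP transport with an 8 KiB I/O unit size (-u 8192), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then prints the bdev_nvme_attach_controller config that bdevperf consumes via --json /dev/fd/62. The same bring-up issued directly with scripts/rpc.py (a sketch; the path-based socket /var/tmp/spdk.sock is reachable from outside the namespace because network namespaces do not isolate the filesystem):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420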
00:26:29.868 00:26:29.868 Latency(us) 00:26:29.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.868 Verification LBA range: start 0x0 length 0x4000 00:26:29.868 Nvme1n1 : 1.00 11037.07 43.11 0.00 0.00 11554.80 2350.75 29861.62 00:26:29.868 =================================================================================================================== 00:26:29.868 Total : 11037.07 43.11 0.00 0.00 11554.80 2350.75 29861.62 00:26:30.128 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2200198 00:26:30.128 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.129 { 00:26:30.129 "params": { 00:26:30.129 "name": "Nvme$subsystem", 00:26:30.129 "trtype": "$TEST_TRANSPORT", 00:26:30.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.129 "adrfam": "ipv4", 00:26:30.129 "trsvcid": "$NVMF_PORT", 00:26:30.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.129 "hdgst": ${hdgst:-false}, 00:26:30.129 "ddgst": ${ddgst:-false} 00:26:30.129 }, 00:26:30.129 "method": "bdev_nvme_attach_controller" 00:26:30.129 } 00:26:30.129 EOF 00:26:30.129 )") 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:30.129 20:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.129 "params": { 00:26:30.129 "name": "Nvme1", 00:26:30.129 "trtype": "tcp", 00:26:30.129 "traddr": "10.0.0.2", 00:26:30.129 "adrfam": "ipv4", 00:26:30.129 "trsvcid": "4420", 00:26:30.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.129 "hdgst": false, 00:26:30.129 "ddgst": false 00:26:30.129 }, 00:26:30.129 "method": "bdev_nvme_attach_controller" 00:26:30.129 }' 00:26:30.129 [2024-07-24 20:02:21.538940] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:26:30.129 [2024-07-24 20:02:21.538987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200198 ] 00:26:30.129 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.129 [2024-07-24 20:02:21.594208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.129 [2024-07-24 20:02:21.663977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.389 Running I/O for 15 seconds... 
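The second bdevperf job repeats the same workload with -t 15 and adds -f, evidently so the run tolerates the controller vanishing, because the next step is deliberate failure injection: the harness SIGKILLs the target (nvmfpid 2199713) while the 15-second verify job is still in flight. Reduced to its two lines (PIDs taken from this log):

  kill -9 2199713   # nvmf_tgt dies; every queued command on the qpair will complete as aborted
  sleep 3           # give the initiator time to observe the dead connection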
00:26:32.926 20:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2199713 00:26:32.926 20:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:32.926 [2024-07-24 20:02:24.518434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.926 [2024-07-24 20:02:24.518474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:32.926-00:26:32.929 [identical command/completion notice pairs repeat for every remaining in-flight IO on qpair 1 - READs covering lba 105824-106320 and WRITEs covering lba 106392-106592, all len:8, every one completed ABORTED - SQ DELETION (00/08) - and the capture breaks off mid-entry at] [2024-07-24 20:02:24.519989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.519996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-24 20:02:24.520397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.929 [2024-07-24 20:02:24.520405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-24 20:02:24.520412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-24 20:02:24.520428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-24 20:02:24.520443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.930 [2024-07-24 20:02:24.520462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.930 [2024-07-24 20:02:24.520554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46ee0 is same with the state(5) to be set 00:26:32.930 [2024-07-24 20:02:24.520570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:32.930 [2024-07-24 20:02:24.520576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:32.930 [2024-07-24 20:02:24.520581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106384 len:8 PRP1 0x0 PRP2 0x0 00:26:32.930 [2024-07-24 20:02:24.520589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.930 [2024-07-24 20:02:24.520631] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc46ee0 was disconnected and freed. reset controller. 
00:26:33.191 [2024-07-24 20:02:24.523485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.523540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.524417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.524462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.524486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.524841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.525016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.191 [2024-07-24 20:02:24.525025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.191 [2024-07-24 20:02:24.525032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.191 [2024-07-24 20:02:24.527780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.191 [2024-07-24 20:02:24.536791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.537425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.537471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.537494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.537972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.538162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.191 [2024-07-24 20:02:24.538172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.191 [2024-07-24 20:02:24.538179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.191 [2024-07-24 20:02:24.540898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.191 [2024-07-24 20:02:24.549809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.550491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.550536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.550558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.551009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.551202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.191 [2024-07-24 20:02:24.551212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.191 [2024-07-24 20:02:24.551218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.191 [2024-07-24 20:02:24.553929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.191 [2024-07-24 20:02:24.562732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.563459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.563503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.563524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.563852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.564015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.191 [2024-07-24 20:02:24.564024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.191 [2024-07-24 20:02:24.564031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.191 [2024-07-24 20:02:24.566761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
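Each reset attempt now dies one layer down, at the socket: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.2:4420 while the target side is down. A standalone, runnable probe that reproduces exactly this check (address and port taken from the log above):

    /* probe_connect.c - check whether an NVMe/TCP listener accepts
     * connections; prints "errno = 111" (ECONNREFUSED) in the same
     * situation as the failures above. Build: cc -o probe probe_connect.c */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
    	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    	int fd = socket(AF_INET, SOCK_STREAM, 0);
    	if (fd < 0) { perror("socket"); return 1; }

    	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    	} else {
    		printf("listener is up\n");
    	}
    	close(fd);
    	return 0;
    }

ECONNREFUSED (as opposed to a timeout) confirms the host is reachable but the port has no listener, which is the expected window while the target restarts.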
00:26:33.191 [2024-07-24 20:02:24.575558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.576210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.576253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.576275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.576853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.577064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.191 [2024-07-24 20:02:24.577090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.191 [2024-07-24 20:02:24.577102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.191 [2024-07-24 20:02:24.579766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.191 [2024-07-24 20:02:24.588457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.191 [2024-07-24 20:02:24.589132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.191 [2024-07-24 20:02:24.589174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.191 [2024-07-24 20:02:24.589196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.191 [2024-07-24 20:02:24.589637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.191 [2024-07-24 20:02:24.589890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.589903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.589912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.593967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.601784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.602441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.602485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.602507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.602949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.603140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.603149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.603156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.605858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.614662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.615345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.615387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.615408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.615985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.616303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.616314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.616321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.618967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.627463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.628127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.628148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.628155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.628316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.628478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.628486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.628493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.631172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.640395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.641074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.641118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.641141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.641464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.641629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.641638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.641644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.644252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
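For reference, the endpoint the loop keeps dialing corresponds to a transport ID of trtype TCP, adrfam IPv4, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1. A hedged sketch of building that trid and connecting with SPDK's public API (spdk_nvme_connect returns NULL while the listener is down, which is what keeps a loop like this spinning):

    /* Sketch: synchronous connect to the target this test uses. All
     * values come from the log; opts NULL selects library defaults. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_target(void)
    {
    	struct spdk_nvme_transport_id trid = {0};

    	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
    	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
    	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
    	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
    	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

    	/* NULL while the fabric connect fails (e.g. ECONNREFUSED). */
    	return spdk_nvme_connect(&trid, NULL, 0);
    }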
00:26:33.192 [2024-07-24 20:02:24.653253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.653851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.653868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.653874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.654036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.654204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.654213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.654219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.656909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.666129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.666799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.666842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.666864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.667141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.667318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.667328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.667336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.669983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.678937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.679610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.679653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.679675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.680265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.680744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.680753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.680760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.684800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.692509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.693185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.693228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.693248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.192 [2024-07-24 20:02:24.693450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.192 [2024-07-24 20:02:24.693617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.192 [2024-07-24 20:02:24.693626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.192 [2024-07-24 20:02:24.693633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.192 [2024-07-24 20:02:24.696358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.192 [2024-07-24 20:02:24.705346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.192 [2024-07-24 20:02:24.706017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.192 [2024-07-24 20:02:24.706073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.192 [2024-07-24 20:02:24.706096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.706543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.706716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.706726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.706732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.709371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-24 20:02:24.718152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.718868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.718910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.718931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.719255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.719433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.719443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.719449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.722036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
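The repeating nvme_ctrlr_disconnect -> nvme_ctrlr_process_init -> spdk_nvme_ctrlr_reconnect_poll_async -> nvme_ctrlr_fail sequence is one pass of the driver's asynchronous reset state machine. Roughly, with the public API, one pass looks like the sketch below; this assumes the spdk_nvme_ctrlr_disconnect/reconnect_async/reconnect_poll_async trio, and the exact return-value conventions may differ between SPDK releases:

    /* Sketch of one reset pass mirroring the call sequence the log
     * names (disconnect, reconnect poll, fail). Not the test's code. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    reset_ctrlr_once(struct spdk_nvme_ctrlr *ctrlr)
    {
    	int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
    	if (rc != 0) {
    		return rc;
    	}

    	spdk_nvme_ctrlr_reconnect_async(ctrlr);

    	do {
    		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    	} while (rc == -EAGAIN); /* still connecting; poll again */

    	return rc; /* 0 on success; negative errno once init fails */
    }

In this log the connect itself never succeeds, so every pass falls straight through to nvme_ctrlr_fail and _bdev_nvme_reset_ctrlr_complete reports the reset as failed before the next attempt starts.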
00:26:33.193 [2024-07-24 20:02:24.731112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.731691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.731733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.731753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.732345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.732705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.732714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.732720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.735308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-24 20:02:24.744062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.744620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.744662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.744683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.745207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.745381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.745392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.745399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.748041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-24 20:02:24.756986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.757662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.757705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.757734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.758324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.758607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.758616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.758623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.761358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-24 20:02:24.769975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.770661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.770678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.770685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.770856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.771027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.771036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.771048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.193 [2024-07-24 20:02:24.773881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.193 [2024-07-24 20:02:24.783040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.193 [2024-07-24 20:02:24.783667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.193 [2024-07-24 20:02:24.783684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.193 [2024-07-24 20:02:24.783691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.193 [2024-07-24 20:02:24.783868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.193 [2024-07-24 20:02:24.784050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.193 [2024-07-24 20:02:24.784060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.193 [2024-07-24 20:02:24.784067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.455 [2024-07-24 20:02:24.786890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.455 [2024-07-24 20:02:24.796077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.455 [2024-07-24 20:02:24.796766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.455 [2024-07-24 20:02:24.796782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.455 [2024-07-24 20:02:24.796789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.455 [2024-07-24 20:02:24.796960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.797140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.797153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.797160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.799893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.809135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.809758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.809801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.809822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.810415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.810747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.810756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.810762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.813389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.822034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.822636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.822651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.822658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.822820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.822983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.822992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.822998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.825685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.834956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.835649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.835691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.835714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.836306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.836771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.836780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.836786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.839473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.847933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.848592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.848637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.848660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.849031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.849205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.849215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.849222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.851907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.860960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.861642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.861685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.861708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.862219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.862393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.862403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.862409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.865134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.873793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.456 [2024-07-24 20:02:24.874472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.456 [2024-07-24 20:02:24.874516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.456 [2024-07-24 20:02:24.874539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.456 [2024-07-24 20:02:24.874903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.456 [2024-07-24 20:02:24.875081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.456 [2024-07-24 20:02:24.875091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.456 [2024-07-24 20:02:24.875099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.456 [2024-07-24 20:02:24.877708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.456 [2024-07-24 20:02:24.886639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.456 [2024-07-24 20:02:24.887325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.456 [2024-07-24 20:02:24.887369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.456 [2024-07-24 20:02:24.887391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.456 [2024-07-24 20:02:24.887976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.456 [2024-07-24 20:02:24.888186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.456 [2024-07-24 20:02:24.888196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.456 [2024-07-24 20:02:24.888202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.456 [2024-07-24 20:02:24.890941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.456 [2024-07-24 20:02:24.899750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.456 [2024-07-24 20:02:24.900440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.456 [2024-07-24 20:02:24.900483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.456 [2024-07-24 20:02:24.900505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.456 [2024-07-24 20:02:24.900932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.456 [2024-07-24 20:02:24.901110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.456 [2024-07-24 20:02:24.901120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.456 [2024-07-24 20:02:24.901127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.456 [2024-07-24 20:02:24.903783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.456 [2024-07-24 20:02:24.912611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.456 [2024-07-24 20:02:24.913260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.456 [2024-07-24 20:02:24.913305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.456 [2024-07-24 20:02:24.913327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.456 [2024-07-24 20:02:24.913913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.914171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.914184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.914194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.918247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.457 [2024-07-24 20:02:24.926010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.926612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.926656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.926678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.926990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.927162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.927173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.927182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.929882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.457 [2024-07-24 20:02:24.938889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.939558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.939613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.939635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.940225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.940496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.940505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.940512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.943204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.457 [2024-07-24 20:02:24.951786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.952461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.952504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.952527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.952976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.953165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.953175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.953181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.955894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.457 [2024-07-24 20:02:24.964695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.965346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.965389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.965410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.965988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.966299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.966309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.966316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.969036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.457 [2024-07-24 20:02:24.977555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.978205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.978254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.978275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.978859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.979023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.979032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.979038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.981722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.457 [2024-07-24 20:02:24.990441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:24.991112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:24.991155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:24.991178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:24.991756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:24.992180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:24.992190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:24.992197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:24.994855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.457 [2024-07-24 20:02:25.003350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:25.004022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:25.004079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:25.004102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:25.004678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:25.005194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:25.005207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:25.005217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:25.009259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.457 [2024-07-24 20:02:25.016923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:25.017608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:25.017652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.457 [2024-07-24 20:02:25.017674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.457 [2024-07-24 20:02:25.017949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.457 [2024-07-24 20:02:25.018143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.457 [2024-07-24 20:02:25.018153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.457 [2024-07-24 20:02:25.018160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.457 [2024-07-24 20:02:25.020855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.457 [2024-07-24 20:02:25.030010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.457 [2024-07-24 20:02:25.030691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.457 [2024-07-24 20:02:25.030707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.458 [2024-07-24 20:02:25.030714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.458 [2024-07-24 20:02:25.030885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.458 [2024-07-24 20:02:25.031062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.458 [2024-07-24 20:02:25.031088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.458 [2024-07-24 20:02:25.031095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.458 [2024-07-24 20:02:25.033898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.458 [2024-07-24 20:02:25.043010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.458 [2024-07-24 20:02:25.043701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.458 [2024-07-24 20:02:25.043745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.458 [2024-07-24 20:02:25.043766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.458 [2024-07-24 20:02:25.044288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.458 [2024-07-24 20:02:25.044461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.458 [2024-07-24 20:02:25.044470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.458 [2024-07-24 20:02:25.044477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.458 [2024-07-24 20:02:25.047297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.719 [2024-07-24 20:02:25.055890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.056521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.056567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.056590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.057185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.057736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.057746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.057753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.060439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
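The companion error in each cycle, "Failed to flush tqpair=0xa14980 (9): Bad file descriptor", appears to be a direct consequence of the failed connect: the qpair's socket is gone by the time nvme_tcp_qpair_process_completions tries to flush pending data through the saved descriptor, so the kernel returns EBADF (errno 9). A sketch of just that mechanism, independent of SPDK:

/* Sketch of the EBADF mechanism only (assumption: the descriptor was
 * closed during qpair teardown, which matches the log ordering). Any
 * I/O on a closed fd fails with errno 9, the "(9): Bad file
 * descriptor" tag in the flush errors above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* teardown closes the socket... */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {  /* ...but a later flush reuses the fd */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}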
00:26:33.719 [2024-07-24 20:02:25.068746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.069411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.069456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.069478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.070067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.070610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.070620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.070626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.073358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.719 [2024-07-24 20:02:25.081691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.082387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.082432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.082454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.083034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.083450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.083460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.083466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.086155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.719 [2024-07-24 20:02:25.094599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.095267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.095311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.095334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.095911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.096413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.096424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.096430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.099083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.719 [2024-07-24 20:02:25.107664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.108346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.108390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.108427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.109004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.109243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.109252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.109258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.111861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.719 [2024-07-24 20:02:25.120706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.121343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.121361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.121368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.121541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.121714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.121723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.121730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.124365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.719 [2024-07-24 20:02:25.133638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.134255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.134272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.134279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.134450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.134623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.134633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.134639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.719 [2024-07-24 20:02:25.137284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.719 [2024-07-24 20:02:25.146548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.719 [2024-07-24 20:02:25.147240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.719 [2024-07-24 20:02:25.147284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.719 [2024-07-24 20:02:25.147306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.719 [2024-07-24 20:02:25.147875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.719 [2024-07-24 20:02:25.148040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.719 [2024-07-24 20:02:25.148059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.719 [2024-07-24 20:02:25.148066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.150735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.720 [2024-07-24 20:02:25.159596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.160263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.160306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.160329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.160907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.161501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.161527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.161554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.164298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.720 [2024-07-24 20:02:25.172576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.173221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.173264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.173287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.173542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.173706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.173716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.173722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.176401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.720 [2024-07-24 20:02:25.185583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.186229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.186274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.186297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.186875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.187464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.187491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.187510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.190271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.720 [2024-07-24 20:02:25.198573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.199234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.199278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.199301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.199879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.200152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.200161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.200167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.202810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.720 [2024-07-24 20:02:25.211440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.212103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.212146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.212169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.212541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.212705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.212714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.212721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.215476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.720 [2024-07-24 20:02:25.224461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.225026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.225049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.225057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.225228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.225409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.225418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.225424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.228012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.720 [2024-07-24 20:02:25.237478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.238148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.238190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.238213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.238787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.238950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.238960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.238967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.241597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.720 [2024-07-24 20:02:25.250412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.251100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.251143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.251166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.251565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.251729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.251739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.720 [2024-07-24 20:02:25.251745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.720 [2024-07-24 20:02:25.254371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.720 [2024-07-24 20:02:25.263337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.720 [2024-07-24 20:02:25.263935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.720 [2024-07-24 20:02:25.263977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.720 [2024-07-24 20:02:25.263999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.720 [2024-07-24 20:02:25.264483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.720 [2024-07-24 20:02:25.264649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.720 [2024-07-24 20:02:25.264659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.721 [2024-07-24 20:02:25.264665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.721 [2024-07-24 20:02:25.267357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.721 [2024-07-24 20:02:25.276261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.721 [2024-07-24 20:02:25.276847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.721 [2024-07-24 20:02:25.276864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.721 [2024-07-24 20:02:25.276871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.721 [2024-07-24 20:02:25.277047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.721 [2024-07-24 20:02:25.277220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.721 [2024-07-24 20:02:25.277230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.721 [2024-07-24 20:02:25.277241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.721 [2024-07-24 20:02:25.280071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.721 [2024-07-24 20:02:25.289348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.721 [2024-07-24 20:02:25.289895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.721 [2024-07-24 20:02:25.289937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.721 [2024-07-24 20:02:25.289959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.721 [2024-07-24 20:02:25.290549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.721 [2024-07-24 20:02:25.291033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.721 [2024-07-24 20:02:25.291047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.721 [2024-07-24 20:02:25.291054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.721 [2024-07-24 20:02:25.293794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.721 [2024-07-24 20:02:25.302372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.721 [2024-07-24 20:02:25.302957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.721 [2024-07-24 20:02:25.302999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.721 [2024-07-24 20:02:25.303021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.721 [2024-07-24 20:02:25.303611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.721 [2024-07-24 20:02:25.304039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.721 [2024-07-24 20:02:25.304054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.721 [2024-07-24 20:02:25.304061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.721 [2024-07-24 20:02:25.306829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.982 [2024-07-24 20:02:25.315536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.316182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.316225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.316247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.316825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.317116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.317126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.317133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.319834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.982 [2024-07-24 20:02:25.328462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.329064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.329115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.329138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.329716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.330295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.330318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.330325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.332906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.982 [2024-07-24 20:02:25.341389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.342074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.342118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.342141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.342719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.343320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.343330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.343336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.346002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.982 [2024-07-24 20:02:25.354330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.355037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.355095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.355116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.355695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.356051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.356061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.356068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.358678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.982 [2024-07-24 20:02:25.367224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.367753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.367769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.367776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.367949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.368133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.368143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.368150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.370762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.982 [2024-07-24 20:02:25.380154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.380715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.380761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.380785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.381224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.381390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.381399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.381405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.384121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.982 [2024-07-24 20:02:25.393129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.982 [2024-07-24 20:02:25.393838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-07-24 20:02:25.393881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.982 [2024-07-24 20:02:25.393903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.982 [2024-07-24 20:02:25.394446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.982 [2024-07-24 20:02:25.394610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.982 [2024-07-24 20:02:25.394620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.982 [2024-07-24 20:02:25.394626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.982 [2024-07-24 20:02:25.397245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.983 [2024-07-24 20:02:25.406040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.983 [2024-07-24 20:02:25.406649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-07-24 20:02:25.406666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.983 [2024-07-24 20:02:25.406673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.983 [2024-07-24 20:02:25.406845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.983 [2024-07-24 20:02:25.407017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.983 [2024-07-24 20:02:25.407027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.983 [2024-07-24 20:02:25.407033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.983 [2024-07-24 20:02:25.409897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.983 [2024-07-24 20:02:25.418925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.983 [2024-07-24 20:02:25.419483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-07-24 20:02:25.419528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.983 [2024-07-24 20:02:25.419551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.983 [2024-07-24 20:02:25.420075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.983 [2024-07-24 20:02:25.420248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.983 [2024-07-24 20:02:25.420258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.983 [2024-07-24 20:02:25.420265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.983 [2024-07-24 20:02:25.422864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.983 [2024-07-24 20:02:25.431822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.983 [2024-07-24 20:02:25.432512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-07-24 20:02:25.432530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.983 [2024-07-24 20:02:25.432537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.983 [2024-07-24 20:02:25.432708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.983 [2024-07-24 20:02:25.432881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.983 [2024-07-24 20:02:25.432890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.983 [2024-07-24 20:02:25.432896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.983 [2024-07-24 20:02:25.435571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.983 [2024-07-24 20:02:25.444789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.983 [2024-07-24 20:02:25.445383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-07-24 20:02:25.445400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:33.983 [2024-07-24 20:02:25.445408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:33.983 [2024-07-24 20:02:25.445570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:33.983 [2024-07-24 20:02:25.445732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.983 [2024-07-24 20:02:25.445742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.983 [2024-07-24 20:02:25.445748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.983 [2024-07-24 20:02:25.448388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.983 [2024-07-24 20:02:25.457652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.983 [2024-07-24 20:02:25.458300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.983 [2024-07-24 20:02:25.458330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.983 [2024-07-24 20:02:25.458368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.983 [2024-07-24 20:02:25.458943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.983 [2024-07-24 20:02:25.459204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.983 [2024-07-24 20:02:25.459217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.983 [2024-07-24 20:02:25.459226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.983 [2024-07-24 20:02:25.463283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.983 [2024-07-24 20:02:25.471105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.983 [2024-07-24 20:02:25.471648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.983 [2024-07-24 20:02:25.471705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.983 [2024-07-24 20:02:25.471727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.983 [2024-07-24 20:02:25.472279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.983 [2024-07-24 20:02:25.472454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.983 [2024-07-24 20:02:25.472464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.983 [2024-07-24 20:02:25.472471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.983 [2024-07-24 20:02:25.475189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.983 [2024-07-24 20:02:25.484073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.983 [2024-07-24 20:02:25.484651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.983 [2024-07-24 20:02:25.484695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.983 [2024-07-24 20:02:25.484716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.983 [2024-07-24 20:02:25.485170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.983 [2024-07-24 20:02:25.485343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.983 [2024-07-24 20:02:25.485353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.983 [2024-07-24 20:02:25.485360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.983 [2024-07-24 20:02:25.488008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.983 [2024-07-24 20:02:25.496955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.983 [2024-07-24 20:02:25.497611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.983 [2024-07-24 20:02:25.497654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.983 [2024-07-24 20:02:25.497677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.983 [2024-07-24 20:02:25.498266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.983 [2024-07-24 20:02:25.498723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.983 [2024-07-24 20:02:25.498737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.983 [2024-07-24 20:02:25.498743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.983 [2024-07-24 20:02:25.501364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.983 [2024-07-24 20:02:25.509818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.983 [2024-07-24 20:02:25.510497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.983 [2024-07-24 20:02:25.510541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.983 [2024-07-24 20:02:25.510564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.983 [2024-07-24 20:02:25.511109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.983 [2024-07-24 20:02:25.511283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.983 [2024-07-24 20:02:25.511305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.983 [2024-07-24 20:02:25.511311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.984 [2024-07-24 20:02:25.513898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.984 [2024-07-24 20:02:25.522662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.984 [2024-07-24 20:02:25.523322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.984 [2024-07-24 20:02:25.523365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.984 [2024-07-24 20:02:25.523388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.984 [2024-07-24 20:02:25.523676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.984 [2024-07-24 20:02:25.523851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.984 [2024-07-24 20:02:25.523863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.984 [2024-07-24 20:02:25.523869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.984 [2024-07-24 20:02:25.526490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.984 [2024-07-24 20:02:25.535809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.984 [2024-07-24 20:02:25.536444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.984 [2024-07-24 20:02:25.536462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.984 [2024-07-24 20:02:25.536470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.984 [2024-07-24 20:02:25.536647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.984 [2024-07-24 20:02:25.536834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.984 [2024-07-24 20:02:25.536844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.984 [2024-07-24 20:02:25.536851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.984 [2024-07-24 20:02:25.539625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.984 [2024-07-24 20:02:25.548882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.984 [2024-07-24 20:02:25.549570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.984 [2024-07-24 20:02:25.549613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.984 [2024-07-24 20:02:25.549635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.984 [2024-07-24 20:02:25.550224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.984 [2024-07-24 20:02:25.550806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.984 [2024-07-24 20:02:25.550833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.984 [2024-07-24 20:02:25.550839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.984 [2024-07-24 20:02:25.554778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.984 [2024-07-24 20:02:25.562338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.984 [2024-07-24 20:02:25.562980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.984 [2024-07-24 20:02:25.563024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.984 [2024-07-24 20:02:25.563061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.984 [2024-07-24 20:02:25.563454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.984 [2024-07-24 20:02:25.563627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.984 [2024-07-24 20:02:25.563636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.984 [2024-07-24 20:02:25.563643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.984 [2024-07-24 20:02:25.566356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.984 [2024-07-24 20:02:25.575421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.984 [2024-07-24 20:02:25.576138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.984 [2024-07-24 20:02:25.576183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:33.984 [2024-07-24 20:02:25.576204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:33.984 [2024-07-24 20:02:25.576663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:33.984 [2024-07-24 20:02:25.576836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.984 [2024-07-24 20:02:25.576846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.984 [2024-07-24 20:02:25.576853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.579624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.588311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.588912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.246 [2024-07-24 20:02:25.588955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.246 [2024-07-24 20:02:25.588977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.246 [2024-07-24 20:02:25.589411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.246 [2024-07-24 20:02:25.589575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.246 [2024-07-24 20:02:25.589585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.246 [2024-07-24 20:02:25.589591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.592320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.601196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.601871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.246 [2024-07-24 20:02:25.601913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.246 [2024-07-24 20:02:25.601935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.246 [2024-07-24 20:02:25.602529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.246 [2024-07-24 20:02:25.602988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.246 [2024-07-24 20:02:25.602998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.246 [2024-07-24 20:02:25.603005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.605620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.614106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.614780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.246 [2024-07-24 20:02:25.614823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.246 [2024-07-24 20:02:25.614845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.246 [2024-07-24 20:02:25.615440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.246 [2024-07-24 20:02:25.615856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.246 [2024-07-24 20:02:25.615866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.246 [2024-07-24 20:02:25.615873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.618505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.626992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.627564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.246 [2024-07-24 20:02:25.627607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.246 [2024-07-24 20:02:25.627629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.246 [2024-07-24 20:02:25.627954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.246 [2024-07-24 20:02:25.628141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.246 [2024-07-24 20:02:25.628151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.246 [2024-07-24 20:02:25.628161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.630854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.639945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.640597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.246 [2024-07-24 20:02:25.640640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.246 [2024-07-24 20:02:25.640663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.246 [2024-07-24 20:02:25.640935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.246 [2024-07-24 20:02:25.641122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.246 [2024-07-24 20:02:25.641132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.246 [2024-07-24 20:02:25.641138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.246 [2024-07-24 20:02:25.643794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.246 [2024-07-24 20:02:25.652765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.246 [2024-07-24 20:02:25.653421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.653465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.653487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.653784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.653948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.653957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.653963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.656645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.665599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.666269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.666285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.666293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.666472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.666643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.666653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.666659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.669347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.678457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.679059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.679109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.679131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.679542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.679705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.679715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.679721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.682329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.691340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.691830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.691873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.691896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.692492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.693082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.693109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.693129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.695805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.704149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.704828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.704871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.704892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.705489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.705943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.705952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.705959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.708576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.717120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.717715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.717758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.717780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.718369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.718914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.718923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.718929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.721658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.730097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.730771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.730813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.730836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.731426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.731984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.731996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.732006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.736057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.743725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.744399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.744442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.744465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.744920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.745115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.745126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.745132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.747843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.756579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.757184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.757226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.757249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.757827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.758158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.758168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.758175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.247 [2024-07-24 20:02:25.760837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.247 [2024-07-24 20:02:25.769484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.247 [2024-07-24 20:02:25.770157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.247 [2024-07-24 20:02:25.770200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.247 [2024-07-24 20:02:25.770222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.247 [2024-07-24 20:02:25.770799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.247 [2024-07-24 20:02:25.770978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.247 [2024-07-24 20:02:25.770987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.247 [2024-07-24 20:02:25.770993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.773677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.248 [2024-07-24 20:02:25.782328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.248 [2024-07-24 20:02:25.783010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.248 [2024-07-24 20:02:25.783026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.248 [2024-07-24 20:02:25.783034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.248 [2024-07-24 20:02:25.783231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.248 [2024-07-24 20:02:25.783408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.248 [2024-07-24 20:02:25.783418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.248 [2024-07-24 20:02:25.783425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.786251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.248 [2024-07-24 20:02:25.795375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.248 [2024-07-24 20:02:25.796059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.248 [2024-07-24 20:02:25.796102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.248 [2024-07-24 20:02:25.796125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.248 [2024-07-24 20:02:25.796702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.248 [2024-07-24 20:02:25.797100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.248 [2024-07-24 20:02:25.797110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.248 [2024-07-24 20:02:25.797116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.799853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.248 [2024-07-24 20:02:25.808233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.248 [2024-07-24 20:02:25.808913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.248 [2024-07-24 20:02:25.808956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.248 [2024-07-24 20:02:25.808985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.248 [2024-07-24 20:02:25.809580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.248 [2024-07-24 20:02:25.810138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.248 [2024-07-24 20:02:25.810148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.248 [2024-07-24 20:02:25.810154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.812760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.248 [2024-07-24 20:02:25.821046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.248 [2024-07-24 20:02:25.821703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.248 [2024-07-24 20:02:25.821745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.248 [2024-07-24 20:02:25.821767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.248 [2024-07-24 20:02:25.822306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.248 [2024-07-24 20:02:25.822560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.248 [2024-07-24 20:02:25.822573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.248 [2024-07-24 20:02:25.822582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.826632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.248 [2024-07-24 20:02:25.834487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.248 [2024-07-24 20:02:25.835146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.248 [2024-07-24 20:02:25.835190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.248 [2024-07-24 20:02:25.835212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.248 [2024-07-24 20:02:25.835790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.248 [2024-07-24 20:02:25.836232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.248 [2024-07-24 20:02:25.836242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.248 [2024-07-24 20:02:25.836248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.248 [2024-07-24 20:02:25.839038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.509 [2024-07-24 20:02:25.847364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.509 [2024-07-24 20:02:25.848008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.509 [2024-07-24 20:02:25.848061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.509 [2024-07-24 20:02:25.848086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.509 [2024-07-24 20:02:25.848375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.509 [2024-07-24 20:02:25.848549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.509 [2024-07-24 20:02:25.848561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.509 [2024-07-24 20:02:25.848569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.509 [2024-07-24 20:02:25.851287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.509 [2024-07-24 20:02:25.860245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.509 [2024-07-24 20:02:25.860925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.509 [2024-07-24 20:02:25.860967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.509 [2024-07-24 20:02:25.860989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.509 [2024-07-24 20:02:25.861433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.509 [2024-07-24 20:02:25.861604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.509 [2024-07-24 20:02:25.861612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.509 [2024-07-24 20:02:25.861618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.509 [2024-07-24 20:02:25.864305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.509 [2024-07-24 20:02:25.873363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.509 [2024-07-24 20:02:25.874021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.509 [2024-07-24 20:02:25.874074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.509 [2024-07-24 20:02:25.874097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.509 [2024-07-24 20:02:25.874474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.509 [2024-07-24 20:02:25.874650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.509 [2024-07-24 20:02:25.874659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.509 [2024-07-24 20:02:25.874665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.509 [2024-07-24 20:02:25.877376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.509 [2024-07-24 20:02:25.886225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.509 [2024-07-24 20:02:25.886896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.509 [2024-07-24 20:02:25.886937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.509 [2024-07-24 20:02:25.886958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.509 [2024-07-24 20:02:25.887372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.509 [2024-07-24 20:02:25.887544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.509 [2024-07-24 20:02:25.887552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.509 [2024-07-24 20:02:25.887558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.509 [2024-07-24 20:02:25.890218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.509 [2024-07-24 20:02:25.899133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.509 [2024-07-24 20:02:25.899797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.509 [2024-07-24 20:02:25.899840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.509 [2024-07-24 20:02:25.899862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.509 [2024-07-24 20:02:25.900453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.900787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.900794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.900801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.903425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.912007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.912722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.912765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.912787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.913379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.913842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.913853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.913862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.917900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.925799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.926445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.926461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.926468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.926634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.926801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.926808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.926814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.929542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.938701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.939350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.939391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.939414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.939697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.939868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.939876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.939883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.942504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.951619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.952222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.952264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.952285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.952649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.952811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.952818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.952824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.955511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.964587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.965272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.965314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.965336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.965674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.965845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.965852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.965859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.968628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.977480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.978155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.978199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.978221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.978569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.978732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.978739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.978748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.981430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:25.990373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:25.991055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:25.991071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:25.991078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:25.991249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:25.991426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:25.991433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:25.991439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:25.994020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:26.003327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:26.003998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:26.004040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:26.004078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:26.004575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:26.004802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:26.004813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:26.004822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:26.008862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:26.016764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:26.017388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:26.017431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.510 [2024-07-24 20:02:26.017453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.510 [2024-07-24 20:02:26.018006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.510 [2024-07-24 20:02:26.018198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.510 [2024-07-24 20:02:26.018210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.510 [2024-07-24 20:02:26.018216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.510 [2024-07-24 20:02:26.020918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.510 [2024-07-24 20:02:26.029589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.510 [2024-07-24 20:02:26.030233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.510 [2024-07-24 20:02:26.030248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.030255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.030426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.030597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.030604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.030611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.033249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.511 [2024-07-24 20:02:26.042629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.511 [2024-07-24 20:02:26.043303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.511 [2024-07-24 20:02:26.043342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.043364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.043890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.044066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.044074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.044080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.046822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.511 [2024-07-24 20:02:26.055558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.511 [2024-07-24 20:02:26.056259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.511 [2024-07-24 20:02:26.056301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.056323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.056837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.057008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.057015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.057022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.059823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.511 [2024-07-24 20:02:26.068450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.511 [2024-07-24 20:02:26.069126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.511 [2024-07-24 20:02:26.069168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.069189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.069766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.069976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.069984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.069989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.072675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.511 [2024-07-24 20:02:26.081318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.511 [2024-07-24 20:02:26.081963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.511 [2024-07-24 20:02:26.082005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.082027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.082617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.083161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.083169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.083175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.085776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.511 [2024-07-24 20:02:26.094101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.511 [2024-07-24 20:02:26.094767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.511 [2024-07-24 20:02:26.094809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.511 [2024-07-24 20:02:26.094830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.511 [2024-07-24 20:02:26.095421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.511 [2024-07-24 20:02:26.095693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.511 [2024-07-24 20:02:26.095705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.511 [2024-07-24 20:02:26.095714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.511 [2024-07-24 20:02:26.099788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.107667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.108244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.108288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.108310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.108798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.108965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.108973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.108979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.111823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.120655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.121146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.121188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.121209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.121600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.121761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.121768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.121774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.124460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.133512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.134076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.134119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.134141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.134598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.134770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.134777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.134784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.137412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.146377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.147062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.147105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.147126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.147704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.147996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.148004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.148010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.150748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.159244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.159918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.159960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.159989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.160582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.160875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.160882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.160889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.163562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.172051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.172726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.172768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.172789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.173381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.173954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.173962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.173968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.176583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.184915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.185594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.185637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.185659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.186087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.186260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.186267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.186273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.190151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.198523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.199167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.199204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.199226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.199756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.199923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.199933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.199939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.202664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.772 [2024-07-24 20:02:26.211499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.772 [2024-07-24 20:02:26.212163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.772 [2024-07-24 20:02:26.212179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.772 [2024-07-24 20:02:26.212186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.772 [2024-07-24 20:02:26.212356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.772 [2024-07-24 20:02:26.212528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.772 [2024-07-24 20:02:26.212535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.772 [2024-07-24 20:02:26.212542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.772 [2024-07-24 20:02:26.215185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.224317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.225004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.225060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.225083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.225659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.226211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.226218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.226225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.228880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.237165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.237808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.237823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.237830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.237991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.238179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.238187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.238194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.240851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.249978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.250546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.250561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.250568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.250729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.250891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.250898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.250904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.253588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.262840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.263521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.263563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.263584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.263986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.264174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.264182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.264188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.266846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.275694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.276342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.276387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.276409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.276966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.277226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.277237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.277246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.281288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.289010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.289653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.289695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.289717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.290315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.290587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.290595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.290602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.293423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.301969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.302673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.302715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.302737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.303126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.303312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.303320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.303326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.306049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.314949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.315664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.315707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.315728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.316162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.316339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.316346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.316353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.319129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.327778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.328454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.328496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.328518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.329113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.329454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.773 [2024-07-24 20:02:26.329462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.773 [2024-07-24 20:02:26.329471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.773 [2024-07-24 20:02:26.332169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.773 [2024-07-24 20:02:26.340663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.773 [2024-07-24 20:02:26.341334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.773 [2024-07-24 20:02:26.341350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.773 [2024-07-24 20:02:26.341356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.773 [2024-07-24 20:02:26.341518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.773 [2024-07-24 20:02:26.341680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.774 [2024-07-24 20:02:26.341687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.774 [2024-07-24 20:02:26.341693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.774 [2024-07-24 20:02:26.344376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.774 [2024-07-24 20:02:26.353545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.774 [2024-07-24 20:02:26.354219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.774 [2024-07-24 20:02:26.354260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:34.774 [2024-07-24 20:02:26.354282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:34.774 [2024-07-24 20:02:26.354860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:34.774 [2024-07-24 20:02:26.355416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.774 [2024-07-24 20:02:26.355424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.774 [2024-07-24 20:02:26.355430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.774 [2024-07-24 20:02:26.358172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.774 [2024-07-24 20:02:26.366599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.035 [2024-07-24 20:02:26.367271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.035 [2024-07-24 20:02:26.367317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.035 [2024-07-24 20:02:26.367339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.035 [2024-07-24 20:02:26.367709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.035 [2024-07-24 20:02:26.367886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.035 [2024-07-24 20:02:26.367893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.035 [2024-07-24 20:02:26.367900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.035 [2024-07-24 20:02:26.370634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.035 [2024-07-24 20:02:26.379491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.035 [2024-07-24 20:02:26.380185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.035 [2024-07-24 20:02:26.380231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.035 [2024-07-24 20:02:26.380254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.035 [2024-07-24 20:02:26.380833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.035 [2024-07-24 20:02:26.381032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.035 [2024-07-24 20:02:26.381039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.035 [2024-07-24 20:02:26.381050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.035 [2024-07-24 20:02:26.383729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.035 [2024-07-24 20:02:26.392312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.035 [2024-07-24 20:02:26.392998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.035 [2024-07-24 20:02:26.393041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.035 [2024-07-24 20:02:26.393078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.035 [2024-07-24 20:02:26.393594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.035 [2024-07-24 20:02:26.393766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.035 [2024-07-24 20:02:26.393773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.035 [2024-07-24 20:02:26.393780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.035 [2024-07-24 20:02:26.396405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.035 [2024-07-24 20:02:26.405164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.035 [2024-07-24 20:02:26.405769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.405811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.405832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.406425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.406830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.406838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.406844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.409468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.417954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.418581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.418596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.418603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.418774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.418951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.418959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.418965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.421641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.430794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.431479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.431522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.431544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.431866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.432028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.432035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.432041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.434930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.443662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.444334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.444378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.444401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.444723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.444885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.444892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.444898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.447583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.456539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.457218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.457260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.457282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.457756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.457919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.457926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.457932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.460620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.469514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.470191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.470233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.470255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.470635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.470797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.470804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.470809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.473493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.482439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.483113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.483155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.483176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.483753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.483947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.483954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.483961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.486645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.495255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.495933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.495975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.495996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.496582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.496754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.496761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.496768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.499395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.508159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.508746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.508788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.508817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.509409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.509889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.509900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.509909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.513975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.036 [2024-07-24 20:02:26.521636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.036 [2024-07-24 20:02:26.522285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.036 [2024-07-24 20:02:26.522331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.036 [2024-07-24 20:02:26.522353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.036 [2024-07-24 20:02:26.522863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.036 [2024-07-24 20:02:26.523030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.036 [2024-07-24 20:02:26.523037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.036 [2024-07-24 20:02:26.523049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.036 [2024-07-24 20:02:26.525801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.534597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.037 [2024-07-24 20:02:26.535278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.037 [2024-07-24 20:02:26.535321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.037 [2024-07-24 20:02:26.535343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.037 [2024-07-24 20:02:26.535846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.037 [2024-07-24 20:02:26.536009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.037 [2024-07-24 20:02:26.536016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.037 [2024-07-24 20:02:26.536022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.037 [2024-07-24 20:02:26.538765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.547522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.037 [2024-07-24 20:02:26.548142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.037 [2024-07-24 20:02:26.548185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.037 [2024-07-24 20:02:26.548207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.037 [2024-07-24 20:02:26.548780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.037 [2024-07-24 20:02:26.548956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.037 [2024-07-24 20:02:26.548967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.037 [2024-07-24 20:02:26.548973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.037 [2024-07-24 20:02:26.551807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.560653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.037 [2024-07-24 20:02:26.561251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.037 [2024-07-24 20:02:26.561267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.037 [2024-07-24 20:02:26.561274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.037 [2024-07-24 20:02:26.561451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.037 [2024-07-24 20:02:26.561628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.037 [2024-07-24 20:02:26.561636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.037 [2024-07-24 20:02:26.561642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.037 [2024-07-24 20:02:26.564464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.573806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.037 [2024-07-24 20:02:26.574408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.037 [2024-07-24 20:02:26.574425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.037 [2024-07-24 20:02:26.574432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.037 [2024-07-24 20:02:26.574609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.037 [2024-07-24 20:02:26.574786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.037 [2024-07-24 20:02:26.574794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.037 [2024-07-24 20:02:26.574800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.037 [2024-07-24 20:02:26.577629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.586965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.037 [2024-07-24 20:02:26.587670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.037 [2024-07-24 20:02:26.587685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.037 [2024-07-24 20:02:26.587693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.037 [2024-07-24 20:02:26.587869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.037 [2024-07-24 20:02:26.588051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.037 [2024-07-24 20:02:26.588059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.037 [2024-07-24 20:02:26.588066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.037 [2024-07-24 20:02:26.590918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.037 [2024-07-24 20:02:26.600026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.037 [2024-07-24 20:02:26.600708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.037 [2024-07-24 20:02:26.600726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.037 [2024-07-24 20:02:26.600733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.037 [2024-07-24 20:02:26.600909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.037 [2024-07-24 20:02:26.601090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.037 [2024-07-24 20:02:26.601098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.037 [2024-07-24 20:02:26.601105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.037 [2024-07-24 20:02:26.603924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.037 [2024-07-24 20:02:26.613085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.037 [2024-07-24 20:02:26.613768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.037 [2024-07-24 20:02:26.613784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.037 [2024-07-24 20:02:26.613791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.037 [2024-07-24 20:02:26.613972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.037 [2024-07-24 20:02:26.614158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.037 [2024-07-24 20:02:26.614167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.037 [2024-07-24 20:02:26.614173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.037 [2024-07-24 20:02:26.617008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.037 [2024-07-24 20:02:26.626195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.037 [2024-07-24 20:02:26.626883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.037 [2024-07-24 20:02:26.626900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.037 [2024-07-24 20:02:26.626907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.037 [2024-07-24 20:02:26.627093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.037 [2024-07-24 20:02:26.627284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.037 [2024-07-24 20:02:26.627292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.037 [2024-07-24 20:02:26.627298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.037 [2024-07-24 20:02:26.630129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.298 [2024-07-24 20:02:26.639304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.298 [2024-07-24 20:02:26.639965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.298 [2024-07-24 20:02:26.639981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.298 [2024-07-24 20:02:26.639988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.298 [2024-07-24 20:02:26.640172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.298 [2024-07-24 20:02:26.640349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.640357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.640364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.643190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.299 [2024-07-24 20:02:26.652367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.652990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.653005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.653012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.653194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.653371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.653379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.653385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.656212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.299 [2024-07-24 20:02:26.665533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.666189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.666206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.666213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.666390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.666567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.666574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.666581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.669467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.299 [2024-07-24 20:02:26.678635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.679290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.679306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.679313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.679490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.679666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.679674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.679683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.682510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.299 [2024-07-24 20:02:26.691679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.692337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.692352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.692359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.692535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.692712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.692720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.692726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.695554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.299 [2024-07-24 20:02:26.704900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.705590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.705606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.705613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.705790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.705966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.705974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.705980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.708881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.299 [2024-07-24 20:02:26.718101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.718771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.718787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.718794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.718970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.719153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.719161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.719168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.722002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.299 [2024-07-24 20:02:26.731171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.731837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.731852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.731859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.732034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.732218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.732226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.732232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.735056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.299 [2024-07-24 20:02:26.744220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.744877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.744892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.744899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.745085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.745262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.745270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.745277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.748152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.299 [2024-07-24 20:02:26.757303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.757972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.299 [2024-07-24 20:02:26.757988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.299 [2024-07-24 20:02:26.757995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.299 [2024-07-24 20:02:26.758176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.299 [2024-07-24 20:02:26.758352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.299 [2024-07-24 20:02:26.758360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.299 [2024-07-24 20:02:26.758366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.299 [2024-07-24 20:02:26.761188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.299 [2024-07-24 20:02:26.770355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.299 [2024-07-24 20:02:26.771031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.771052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.771060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.771236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.771415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.771423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.771430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.774253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.300 [2024-07-24 20:02:26.783419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.784079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.784096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.784103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.784279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.784455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.784463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.784470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.787295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.300 [2024-07-24 20:02:26.796451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.797087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.797103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.797110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.797286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.797463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.797471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.797477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.800299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.300 [2024-07-24 20:02:26.809628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.810314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.810377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.810954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.811556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.811565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.811571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.814397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.300 [2024-07-24 20:02:26.822739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.823405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.823440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.823462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.824040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.824630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.824654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.824682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.827426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.300 [2024-07-24 20:02:26.835720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.836332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.836379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.836401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.836977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.837571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.837596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.837616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.840435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.300 [2024-07-24 20:02:26.848654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.849323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.849366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.849387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.849904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.850074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.850082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.850088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.852731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.300 [2024-07-24 20:02:26.861520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.862330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.862375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.862385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.862546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.862708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.862716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.862722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.865337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.300 [2024-07-24 20:02:26.874411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.875063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.875078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.875085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.875246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.875408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.875415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.875420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.300 [2024-07-24 20:02:26.878108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.300 [2024-07-24 20:02:26.887330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.300 [2024-07-24 20:02:26.887988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.300 [2024-07-24 20:02:26.888031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.300 [2024-07-24 20:02:26.888065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.300 [2024-07-24 20:02:26.888499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.300 [2024-07-24 20:02:26.888676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.300 [2024-07-24 20:02:26.888684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.300 [2024-07-24 20:02:26.888690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.301 [2024-07-24 20:02:26.891455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.561 [2024-07-24 20:02:26.900397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.901019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.901071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.901094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.901471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.901645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.901656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.901661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.904348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.561 [2024-07-24 20:02:26.913256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.913823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.913838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.913845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.914006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.914197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.914205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.914211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.916918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.561 [2024-07-24 20:02:26.926236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.926804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.926819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.926825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.926987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.927174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.927182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.927188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.929867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.561 [2024-07-24 20:02:26.939138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.939814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.939857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.939880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.940471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.940869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.940876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.940883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.943557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.561 [2024-07-24 20:02:26.952140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.952764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.952806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.952827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.953179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.953341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.953349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.953355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.955970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.561 [2024-07-24 20:02:26.965114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.965771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.965785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.965792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.561 [2024-07-24 20:02:26.965953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.561 [2024-07-24 20:02:26.966119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.561 [2024-07-24 20:02:26.966127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.561 [2024-07-24 20:02:26.966133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.561 [2024-07-24 20:02:26.968823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.561 [2024-07-24 20:02:26.978052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.561 [2024-07-24 20:02:26.978717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.561 [2024-07-24 20:02:26.978759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.561 [2024-07-24 20:02:26.978781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:26.979171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:26.979342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:26.979350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:26.979356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:26.982023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.562 [2024-07-24 20:02:26.990926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:26.991595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:26.991639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:26.991661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:26.992037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:26.992213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:26.992222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:26.992228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:26.994880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.562 [2024-07-24 20:02:27.004062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.004719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.004760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.004782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.005375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.005957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.005988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.005994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.008765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.562 [2024-07-24 20:02:27.016990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.017633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.017649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.017656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.017827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.017998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.018006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.018012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.020741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.562 [2024-07-24 20:02:27.030007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.030700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.030742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.030763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.031116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.031288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.031296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.031306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.034008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.562 [2024-07-24 20:02:27.042966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.043626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.043667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.043689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.044293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.044466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.044473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.044480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.047224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.562 [2024-07-24 20:02:27.055975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.056649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.056665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.056672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.056848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.057024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.057032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.057038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.059875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.562 [2024-07-24 20:02:27.069003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.069690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.069733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.069755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.070080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.070257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.070265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.070272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.073047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.562 [2024-07-24 20:02:27.081950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.082642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.082684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.082706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.083299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.083497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.083505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.083511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.086202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.562 [2024-07-24 20:02:27.094777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.095449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.562 [2024-07-24 20:02:27.095492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.562 [2024-07-24 20:02:27.095513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.562 [2024-07-24 20:02:27.096089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.562 [2024-07-24 20:02:27.096298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.562 [2024-07-24 20:02:27.096309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.562 [2024-07-24 20:02:27.096318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.562 [2024-07-24 20:02:27.100373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.562 [2024-07-24 20:02:27.108222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.562 [2024-07-24 20:02:27.108869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.563 [2024-07-24 20:02:27.108884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.563 [2024-07-24 20:02:27.108891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.563 [2024-07-24 20:02:27.109062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.563 [2024-07-24 20:02:27.109228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.563 [2024-07-24 20:02:27.109236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.563 [2024-07-24 20:02:27.109242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.563 [2024-07-24 20:02:27.111936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.563 [2024-07-24 20:02:27.121306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.563 [2024-07-24 20:02:27.121903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.563 [2024-07-24 20:02:27.121946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.563 [2024-07-24 20:02:27.121969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.563 [2024-07-24 20:02:27.122457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.563 [2024-07-24 20:02:27.122633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.563 [2024-07-24 20:02:27.122640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.563 [2024-07-24 20:02:27.122646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.563 [2024-07-24 20:02:27.125380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.563 [2024-07-24 20:02:27.134087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.563 [2024-07-24 20:02:27.134718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.563 [2024-07-24 20:02:27.134733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.563 [2024-07-24 20:02:27.134740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.563 [2024-07-24 20:02:27.134901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.563 [2024-07-24 20:02:27.135069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.563 [2024-07-24 20:02:27.135076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.563 [2024-07-24 20:02:27.135082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.563 [2024-07-24 20:02:27.137686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.563 [2024-07-24 20:02:27.147110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.563 [2024-07-24 20:02:27.147758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.563 [2024-07-24 20:02:27.147773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.563 [2024-07-24 20:02:27.147779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.563 [2024-07-24 20:02:27.147941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.563 [2024-07-24 20:02:27.148108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.563 [2024-07-24 20:02:27.148116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.563 [2024-07-24 20:02:27.148122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.563 [2024-07-24 20:02:27.150798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.825 [2024-07-24 20:02:27.160134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.825 [2024-07-24 20:02:27.160758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.825 [2024-07-24 20:02:27.160773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:35.825 [2024-07-24 20:02:27.160780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:35.825 [2024-07-24 20:02:27.160952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:35.825 [2024-07-24 20:02:27.161146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.825 [2024-07-24 20:02:27.161155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.825 [2024-07-24 20:02:27.161161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.825 [2024-07-24 20:02:27.163982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.825 [2024-07-24 20:02:27.173096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.173698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.173740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.173762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.174351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.174653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.174660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.174666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.177282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.186041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.186694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.186736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.186758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.187348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.187681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.187689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.187695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.190320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.198903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.199565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.199608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.199629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.200219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.200721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.200729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.200735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.203356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.211797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.212436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.212451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.212460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.212622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.212782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.212790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.212795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.215463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.224689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.225322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.225365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.225386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.225797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.225959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.225966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.225972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.228640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.237488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.238134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.238177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.238198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.238597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.238759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.238766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.238772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.241452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.250293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.250956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.250971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.250977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.251165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.251336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.251347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.251353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.254008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.263073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.263728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.263743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.263749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.263910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.825 [2024-07-24 20:02:27.264095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.825 [2024-07-24 20:02:27.264103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.825 [2024-07-24 20:02:27.264110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.825 [2024-07-24 20:02:27.266781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.825 [2024-07-24 20:02:27.275994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.825 [2024-07-24 20:02:27.276671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.825 [2024-07-24 20:02:27.276714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.825 [2024-07-24 20:02:27.276735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.825 [2024-07-24 20:02:27.277326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.277757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.277765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.277771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.281718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.289564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.290194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.290237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.290258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.290736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.290902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.290910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.290915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.293637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.302377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.302956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.302998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.303019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.303489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.303662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.303669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.303675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.306355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.315506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.316158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.316173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.316180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.316356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.316542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.316550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.316556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.319358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.328396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.329056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.329120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.329521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.329683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.329690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.329696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.332378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.341174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.341819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.341860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.341881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.342481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.342892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.342900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.342906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.345539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.354004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.354661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.354705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.354726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.354997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.355184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.355192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.355198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.357853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.366851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.367475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.367491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.367497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.367658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.367821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.367828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.367834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.370560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.379666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.380311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.380349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.380372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.380950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.381136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.381144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.381154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.383811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.392555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.826 [2024-07-24 20:02:27.393232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.826 [2024-07-24 20:02:27.393275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.826 [2024-07-24 20:02:27.393298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.826 [2024-07-24 20:02:27.393875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.826 [2024-07-24 20:02:27.394096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.826 [2024-07-24 20:02:27.394105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.826 [2024-07-24 20:02:27.394111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.826 [2024-07-24 20:02:27.396861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.826 [2024-07-24 20:02:27.405588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.827 [2024-07-24 20:02:27.406252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.827 [2024-07-24 20:02:27.406295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.827 [2024-07-24 20:02:27.406317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:35.827 [2024-07-24 20:02:27.406841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:35.827 [2024-07-24 20:02:27.407012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.827 [2024-07-24 20:02:27.407020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.827 [2024-07-24 20:02:27.407026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.827 [2024-07-24 20:02:27.409776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.827 [2024-07-24 20:02:27.418566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.827 [2024-07-24 20:02:27.419216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.827 [2024-07-24 20:02:27.419258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:35.827 [2024-07-24 20:02:27.419279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.087 [2024-07-24 20:02:27.419856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.087 [2024-07-24 20:02:27.420436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.087 [2024-07-24 20:02:27.420444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.087 [2024-07-24 20:02:27.420451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.087 [2024-07-24 20:02:27.423167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.087 [2024-07-24 20:02:27.431488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.087 [2024-07-24 20:02:27.432144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.087 [2024-07-24 20:02:27.432186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.087 [2024-07-24 20:02:27.432207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.087 [2024-07-24 20:02:27.432783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.087 [2024-07-24 20:02:27.433258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.087 [2024-07-24 20:02:27.433266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.087 [2024-07-24 20:02:27.433272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.087 [2024-07-24 20:02:27.435962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.087 [2024-07-24 20:02:27.444306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.087 [2024-07-24 20:02:27.444960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.087 [2024-07-24 20:02:27.445001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.087 [2024-07-24 20:02:27.445023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.087 [2024-07-24 20:02:27.445617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.087 [2024-07-24 20:02:27.446066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.087 [2024-07-24 20:02:27.446074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.087 [2024-07-24 20:02:27.446081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.087 [2024-07-24 20:02:27.448740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.087 [2024-07-24 20:02:27.457230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.457877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.457920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.457942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.458364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.458535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.458543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.458549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.461190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.470082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.470768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.470811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.470832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.471237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.471409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.471416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.471422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.474067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.482862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.483519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.483562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.483584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.484013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.484203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.484211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.484217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.486874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.495666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.496326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.496369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.496390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.496913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.497096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.497104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.497110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.499767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
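Every retry block above shows the same failure pattern: the host disconnects nqn.2016-06.io.spdk:cnode1 for a reset, connect() to 10.0.0.2 port 4420 is refused because nothing is listening (errno 111 is ECONNREFUSED), the flush on the already-dead socket then fails with errno 9 (EBADF, "Bad file descriptor"), and reconnect polling marks the controller failed until the next reset attempt. A minimal way to watch for the same condition from a shell, assuming only the address and port taken from this log (run it inside the test namespace named further below if the address is namespaced):

  # Sketch: retry a TCP connect to the NVMe-oF listener until it succeeds.
  # bash's /dev/tcp redirection performs the same connect(2) the host does,
  # so it fails with "Connection refused" for as long as the target is down.
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      echo 'connect() refused; target not listening yet'
      sleep 1
  done
  echo 'port 4420 is accepting connections again'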
00:26:36.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2199713 Killed "${NVMF_APP[@]}" "$@"
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:36.088 [2024-07-24 20:02:27.508834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.509496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.509512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.509518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.509697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.509873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.509881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.509887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.512738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2201123
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2201123
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2201123 ']'
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:36.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:36.088 20:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:36.088 [2024-07-24 20:02:27.521918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.522620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.522663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.522685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.523122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.523299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.523307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.523314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.526100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.534959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.535641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.535684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.535707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.536075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.536253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.536261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.536270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.539024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.547919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.548573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.548615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.088 [2024-07-24 20:02:27.548638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.088 [2024-07-24 20:02:27.549230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.088 [2024-07-24 20:02:27.549553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.088 [2024-07-24 20:02:27.549561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.088 [2024-07-24 20:02:27.549566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.088 [2024-07-24 20:02:27.552255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.088 [2024-07-24 20:02:27.560874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.088 [2024-07-24 20:02:27.561415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.088 [2024-07-24 20:02:27.561431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.561438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.561609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.561780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.561788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.561794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.563071] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:26:36.089 [2024-07-24 20:02:27.563119] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:36.089 [2024-07-24 20:02:27.564626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
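The shell trace interleaved above shows bdevperf.sh recovering from the killed target: tgt_init calls nvmfappstart, which relaunches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with instance id 0, tracepoint mask 0xFFFF and core mask 0xE, records nvmfpid=2201123, and waitforlisten then polls /var/tmp/spdk.sock (up to max_retries=100) before any RPCs are issued. A rough standalone equivalent of that sequence, assuming root and the paths from this log; this is a sketch of the traced commands, not the autotest helpers themselves:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # waitforlisten, approximately: poll for the RPC socket instead of sleeping blindly.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done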
00:26:36.089 [2024-07-24 20:02:27.573871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.574565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.574609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.574630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.575115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.575287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.575295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.575301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.578046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.586937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.587626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.587669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.587690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.588184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.588356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.588364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.588370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 EAL: No free 2048 kB hugepages reported on node 1
00:26:36.089 [2024-07-24 20:02:27.591010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.599949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.600557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.600573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.600580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.600752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.600923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.600930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.600936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.603681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.612945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.613641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.613656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.613664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.613835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.614006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.614014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.614020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.616761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.622214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:36.089 [2024-07-24 20:02:27.625992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.626685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.626700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.626707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.626880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.627056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.627064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.627071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.629873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.639093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.639755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.639771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.639779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.639952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.640130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.640139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.640145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.642883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.652120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.652797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.652812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.652819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.652991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.653169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.653178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.653184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.655928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.665160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.665844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.665865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.665873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.666052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.666232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.089 [2024-07-24 20:02:27.666242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.089 [2024-07-24 20:02:27.666249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.089 [2024-07-24 20:02:27.668988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.089 [2024-07-24 20:02:27.678225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.089 [2024-07-24 20:02:27.678882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.089 [2024-07-24 20:02:27.678898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.089 [2024-07-24 20:02:27.678906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.089 [2024-07-24 20:02:27.679087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.089 [2024-07-24 20:02:27.679266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.090 [2024-07-24 20:02:27.679274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.090 [2024-07-24 20:02:27.679281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.090 [2024-07-24 20:02:27.682116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.350 [2024-07-24 20:02:27.691396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.350 [2024-07-24 20:02:27.692077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.350 [2024-07-24 20:02:27.692093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420
00:26:36.350 [2024-07-24 20:02:27.692101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set
00:26:36.350 [2024-07-24 20:02:27.692272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor
00:26:36.350 [2024-07-24 20:02:27.692443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.350 [2024-07-24 20:02:27.692450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.350 [2024-07-24 20:02:27.692457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.350 [2024-07-24 20:02:27.695198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.350 [2024-07-24 20:02:27.698667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:36.350 [2024-07-24 20:02:27.698698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:36.350 [2024-07-24 20:02:27.698706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:36.350 [2024-07-24 20:02:27.698712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:36.350 [2024-07-24 20:02:27.698716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
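The app_setup_trace notices above spell out the capture recipe for this run: tracepoints were enabled with group mask 0xFFFF under instance id 0, so the events can either be snapshotted live or the shared-memory buffer kept for offline analysis, exactly as the notices suggest:

  spdk_trace -s nvmf -i 0           # snapshot events from the running target
  cp /dev/shm/nvmf_trace.0 /tmp/    # preserve the trace buffer for later debugging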
00:26:36.350 [2024-07-24 20:02:27.698805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:36.350 [2024-07-24 20:02:27.698876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:36.350 [2024-07-24 20:02:27.698940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:36.877 [2024-07-24 20:02:28.333550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.334162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.334179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.334187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.334363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.334540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.334549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.334555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.337382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.877 [2024-07-24 20:02:28.346729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.347272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.347288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.347295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.347472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.347648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.347657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.347663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.350486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.877 [2024-07-24 20:02:28.359832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.360495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.360512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.360519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.360695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.360872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.360880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.360892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.363718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.877 [2024-07-24 20:02:28.373083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.373634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.373651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.373658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.373835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.374012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.374020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.374027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:36.877 [2024-07-24 20:02:28.376849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.877 [2024-07-24 20:02:28.386211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.386752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.386769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.386777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.386953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.387135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.387144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.387150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.389969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.877 [2024-07-24 20:02:28.399300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.399839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.399855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.399862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.400038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.400221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.400245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.400256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.403086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.877 [2024-07-24 20:02:28.412419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.877 [2024-07-24 20:02:28.413032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.413054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.413062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.413238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.877 [2024-07-24 20:02:28.413416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.413426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.413432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.877 [2024-07-24 20:02:28.416435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.877 [2024-07-24 20:02:28.418570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.877 [2024-07-24 20:02:28.425598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.426211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.877 [2024-07-24 20:02:28.426227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.877 [2024-07-24 20:02:28.426234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.877 [2024-07-24 20:02:28.426410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.877 [2024-07-24 20:02:28.426587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.877 [2024-07-24 20:02:28.426595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.877 [2024-07-24 20:02:28.426602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.877 [2024-07-24 20:02:28.429428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.877 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.877 [2024-07-24 20:02:28.438745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.877 [2024-07-24 20:02:28.439423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.878 [2024-07-24 20:02:28.439439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.878 [2024-07-24 20:02:28.439450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.878 [2024-07-24 20:02:28.439627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.878 [2024-07-24 20:02:28.439803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.878 [2024-07-24 20:02:28.439811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.878 [2024-07-24 20:02:28.439818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.878 [2024-07-24 20:02:28.442642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.878 [2024-07-24 20:02:28.451824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.878 [2024-07-24 20:02:28.452534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.878 [2024-07-24 20:02:28.452553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.878 [2024-07-24 20:02:28.452560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.878 [2024-07-24 20:02:28.452738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.878 [2024-07-24 20:02:28.452915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.878 [2024-07-24 20:02:28.452923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.878 [2024-07-24 20:02:28.452929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.878 Malloc0 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.878 [2024-07-24 20:02:28.455750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.878 [2024-07-24 20:02:28.464924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.878 [2024-07-24 20:02:28.465609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.878 [2024-07-24 20:02:28.465625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa14980 with addr=10.0.0.2, port=4420 00:26:36.878 [2024-07-24 20:02:28.465633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14980 is same with the state(5) to be set 00:26:36.878 [2024-07-24 20:02:28.465810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14980 (9): Bad file descriptor 00:26:36.878 [2024-07-24 20:02:28.465986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.878 [2024-07-24 20:02:28.465994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.878 [2024-07-24 20:02:28.466000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.878 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.878 [2024-07-24 20:02:28.468827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.137 [2024-07-24 20:02:28.477987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:37.137 [2024-07-24 20:02:28.478154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.137 20:02:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2200198 00:26:37.137 [2024-07-24 20:02:28.509228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
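Interleaved with the failure loop above, the host/bdevperf.sh@17-21 traces rebuild the target that bdevperf finally reconnects to. Pulled out of the xtrace noise, the sequence corresponds to the following RPC calls; this is a sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket rather than the harness's rpc_cmd wrapper, with every flag taken verbatim from the traces:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2:4420 is back, the pending resets complete, which is what the "Resetting controller successful" notice above records.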
00:26:47.124
00:26:47.124 Latency(us)
00:26:47.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.124 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:47.124 Verification LBA range: start 0x0 length 0x4000
00:26:47.124 Nvme1n1 : 15.01 8369.03 32.69 12191.63 0.00 6204.91 1082.77 27924.03
00:26:47.124 ===================================================================================================================
00:26:47.124 Total : 8369.03 32.69 12191.63 0.00 6204.91 1082.77 27924.03
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:47.124 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:47.125 rmmod nvme_tcp
00:26:47.125 rmmod nvme_fabrics
00:26:47.125 rmmod nvme_keyring
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2201123 ']'
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2201123
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2201123 ']'
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2201123
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2201123
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2201123'
00:26:47.125 killing process with pid 2201123
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2201123
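As a sanity check on the results table above, the MiB/s column follows directly from the IOPS column and the 4096-byte IO size printed in the Job line (a one-liner sketch, assuming python3):

  python3 -c 'print(round(8369.03 * 4096 / 2**20, 2))'
  # -> 32.69, matching the Nvme1n1 row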
00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2201123 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.125 20:02:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.064 00:26:48.064 real 0m26.620s 00:26:48.064 user 1m3.484s 00:26:48.064 sys 0m6.465s 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.064 ************************************ 00:26:48.064 END TEST nvmf_bdevperf 00:26:48.064 ************************************ 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.064 20:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.325 ************************************ 00:26:48.325 START TEST nvmf_target_disconnect 00:26:48.325 ************************************ 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:48.325 * Looking for test storage... 
00:26:48.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.325 
20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.325 20:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.652 
20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:53.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:53.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:53.652 Found net devices under 0000:86:00.0: cvl_0_0 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:53.652 Found net devices under 0000:86:00.1: cvl_0_1 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.652 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:53.653 00:26:53.653 --- 10.0.0.2 ping statistics --- 00:26:53.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.653 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms
00:26:53.653
00:26:53.653 --- 10.0.0.1 ping statistics ---
00:26:53.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:53.653 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:53.653 ************************************
00:26:53.653 START TEST nvmf_target_disconnect_tc1 ************************************
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:53.653 EAL: No free 2048 kB hugepages reported on node 1
00:26:53.653 [2024-07-24 20:02:44.694891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.653 [2024-07-24 20:02:44.694987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c8e60 with addr=10.0.0.2, port=4420
00:26:53.653 [2024-07-24 20:02:44.695032] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:53.653 [2024-07-24 20:02:44.695072] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:53.653 [2024-07-24 20:02:44.695091] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:26:53.653 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:26:53.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:26:53.653 Initializing NVMe Controllers
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:53.653
00:26:53.653 real 0m0.093s
00:26:53.653 user 0m0.039s
00:26:53.653 sys 0m0.054s
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:53.653 ************************************
00:26:53.653 END TEST nvmf_target_disconnect_tc1 ************************************
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:53.653 ************************************
00:26:53.653 START TEST nvmf_target_disconnect_tc2 ************************************
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2206064
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2206064
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2206064 ']'
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:53.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:53.653 20:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:53.653 [2024-07-24 20:02:44.815754] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization...
00:26:53.653 [2024-07-24 20:02:44.815789] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:53.653 EAL: No free 2048 kB hugepages reported on node 1
00:26:53.653 [2024-07-24 20:02:44.887787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:53.653 [2024-07-24 20:02:44.966226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:53.653 [2024-07-24 20:02:44.966263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:53.653 [2024-07-24 20:02:44.966269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:53.654 [2024-07-24 20:02:44.966275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:53.654 [2024-07-24 20:02:44.966280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:53.654 [2024-07-24 20:02:44.966859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:53.654 [2024-07-24 20:02:44.966885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:53.654 [2024-07-24 20:02:44.966971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:53.654 [2024-07-24 20:02:44.966972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:54.267 Malloc0
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:54.267 [2024-07-24 20:02:45.669081] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
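The trace here is the harness starting the target and provisioning it over RPC: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace, waitforlisten polled /var/tmp/spdk.sock until the app answered, and each rpc_cmd wraps scripts/rpc.py. A minimal by-hand sketch of the same flow, assuming the workspace paths from this run (rpc_get_methods stands in here for waitforlisten's readiness probe, and the add_ns/add_listener calls appear in the trace just below):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do                  # waitforlisten traced max_retries=100 above
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1
done
"$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_transport -t tcp -o           # transport options exactly as traced
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420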
00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 [2024-07-24 20:02:45.697309] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2206311 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:54.267 20:02:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.267 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.176 20:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2206064 00:26:56.176 20:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting 
I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Write completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 [2024-07-24 20:02:47.723314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.176 Read completed with error (sct=0, sc=8) 00:26:56.176 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 
00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 [2024-07-24 20:02:47.723522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read 
completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Write completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 Read completed with error (sct=0, sc=8) 00:26:56.177 starting I/O failed 00:26:56.177 [2024-07-24 20:02:47.723722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.177 [2024-07-24 20:02:47.724158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.724175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.724578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.724588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.725016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.725058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.725503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.725533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.725983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.726012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.726414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.726444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 
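Decoding the failure pattern above: every queued I/O completes with (sct=0, sc=8), that is status-code-type 0 (the NVMe generic command status set) and status code 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion; the CQ transport error -6 lines are qpairs 4, 1 and 3 losing their TCP sockets once kill -9 2206064 removed the target, after which the reconnect example keeps rebuilding qpairs that can no longer connect. A quick tally over a saved copy of this console output (build.log is a hypothetical filename):

grep -c 'completed with error (sct=0, sc=8)' build.log            # total aborted I/O
grep -o 'CQ transport error -6 .* on qpair id [0-9]*' build.log | sort | uniq -c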
00:26:56.177 [2024-07-24 20:02:47.726960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.726970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.727360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.727370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.727740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.727770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.728273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.728305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.728616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.728626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.729050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.729060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.729411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.729441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.729959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.729989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.730434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.730465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 00:26:56.177 [2024-07-24 20:02:47.730788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.177 [2024-07-24 20:02:47.730817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.177 qpair failed and we were unable to recover it. 
00:26:56.177 [2024-07-24 20:02:47.731197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.731228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.731650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.731680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.732063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.732094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.732606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.732647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.733134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.733147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2d8000b90 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.733662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.733688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.734190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.734225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.734655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.734685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.735058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.735090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.735547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.735577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 
00:26:56.178 [2024-07-24 20:02:47.736094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.736127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.736570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.736599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.737096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.737128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.737581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.737611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.737986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.737999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.738427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.738441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.738890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.738903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.739334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.739349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.739756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.739770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.740174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.740189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 
00:26:56.178 [2024-07-24 20:02:47.740615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.740628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.740976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.740990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.741339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.741353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.741828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.741841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.742281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.742311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.742688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.742719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.743204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.743246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.743698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.743711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.744078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.744109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.744592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.744626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 
00:26:56.178 [2024-07-24 20:02:47.744945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.744958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.745411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.745425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.745878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.745914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.746403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.746434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.746949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.746980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.747423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.178 [2024-07-24 20:02:47.747829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.178 [2024-07-24 20:02:47.747859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.178 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.748295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.748326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.748750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.748780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.749236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.749267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 
00:26:56.179 [2024-07-24 20:02:47.749777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.749807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.750317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.750348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.750859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.750889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.751345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.751377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.751822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.751851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.752087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.752118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.752564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.752594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.752952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.752981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.753479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.753511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.753968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.753998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 
00:26:56.179 [2024-07-24 20:02:47.754427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.754458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.754940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.754970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.755480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.755511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.755996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.756026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.756541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.756572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.757063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.757094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.757540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.757570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.757983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.757996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.758444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.758458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.758915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.758950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 
00:26:56.179 [2024-07-24 20:02:47.759437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.759468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.759979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.760009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.760439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.760470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.760854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.760884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.761263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.761294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.761675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.761704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.762152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.762189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.762518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.762531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.762936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.762966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.763474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.763505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 
00:26:56.179 [2024-07-24 20:02:47.764012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.764060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.764516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.764546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.764923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.764952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.179 qpair failed and we were unable to recover it. 00:26:56.179 [2024-07-24 20:02:47.765502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.179 [2024-07-24 20:02:47.765533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.765962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.765992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.766536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.766566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.766997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.767027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.767441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.767471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.767903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.767917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 00:26:56.180 [2024-07-24 20:02:47.768333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.180 [2024-07-24 20:02:47.768347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.180 qpair failed and we were unable to recover it. 
00:26:56.180 [2024-07-24 20:02:47.768821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.180 [2024-07-24 20:02:47.768834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:56.180 qpair failed and we were unable to recover it.
[the same connect()-failed / sock-connection-error / qpair-failed triple repeats for every attempt from 20:02:47.769307 through 20:02:47.806464, always with errno = 111, tqpair=0x1e54f30, addr=10.0.0.2, port=4420; only the timestamps differ]
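Note: errno = 111 is ECONNREFUSED on Linux — the TCP connection attempt to 10.0.0.2 port 4420 (the conventional NVMe/TCP port) is being actively refused because nothing is listening there, so every reconnect attempt fails identically. A minimal standalone sketch (not SPDK code; the address and port are copied from the log) that reproduces the same errno against a port with no listener:

/* sketch: reproduce errno = 111 (ECONNREFUSED) with a plain connect() */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The same per-attempt failure is what posix_sock_create surfaces to nvme_tcp_qpair_connect_sock in the records above.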
00:26:56.451 [2024-07-24 20:02:47.806888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.806918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.451 [2024-07-24 20:02:47.807429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.807459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.451 [2024-07-24 20:02:47.807962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.807992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.451 [2024-07-24 20:02:47.808487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.808518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.451 [2024-07-24 20:02:47.809025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.809065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.451 [2024-07-24 20:02:47.809430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.451 [2024-07-24 20:02:47.809460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.451 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.809939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.809953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.810374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.810405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.810877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.810907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.811397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.811412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 
00:26:56.452 [2024-07-24 20:02:47.811791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.811805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.812280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.812294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.812748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.812777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.813270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.813301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.813756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.813786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.814273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.814304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.814724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.814755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.815247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.815277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.815713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.815743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.816255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.816285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 
00:26:56.452 [2024-07-24 20:02:47.816741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.816771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.817209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.817244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.817765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.817795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.818313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.818344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.818667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.818696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.819073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.819103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.819607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.819637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.820147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.820178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.820619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.820649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.821156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.821188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 
00:26:56.452 [2024-07-24 20:02:47.821677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.821707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.822220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.822251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.822759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.822789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.823215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.823229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.823709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.823739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.824230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.824261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.824786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.824815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.825274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.825304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.825755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.825784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.826268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.826299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 
00:26:56.452 [2024-07-24 20:02:47.826810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.826840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.827333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.452 [2024-07-24 20:02:47.827363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.452 qpair failed and we were unable to recover it. 00:26:56.452 [2024-07-24 20:02:47.827811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.827841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.828273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.828304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.828745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.828775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.829194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.829225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.829683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.829713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.830091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.830122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.830602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.830638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.831066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.831097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 
00:26:56.453 [2024-07-24 20:02:47.831526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.831557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.831975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.832004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.832412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.832443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.832895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.832925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.833412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.833443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.833872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.833902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.834335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.834365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.834899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.834928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.835412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.835443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.835829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.835859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 
00:26:56.453 [2024-07-24 20:02:47.836345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.836375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.836857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.836887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.837312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.837343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.837784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.837814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.838299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.838330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.838814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.838844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.839294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.839324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.839759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.839788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.840039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.840060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.840406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.840420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 
00:26:56.453 [2024-07-24 20:02:47.840875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.840905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.841265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.841296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.841771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.841784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.842262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.842293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.842781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.842811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.843299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.843329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.843788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.843818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.453 qpair failed and we were unable to recover it. 00:26:56.453 [2024-07-24 20:02:47.844303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.453 [2024-07-24 20:02:47.844333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.844775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.844805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.845239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.845269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 
00:26:56.454 [2024-07-24 20:02:47.845750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.845780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.846290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.846329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.846746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.846760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.847101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.847132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.847376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.847405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.847836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.847866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.848297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.848340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.848727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.848756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.849265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.849296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.849492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.849527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 
00:26:56.454 [2024-07-24 20:02:47.850035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.850086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.850598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.850628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.851161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.851192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.851626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.851655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.852116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.852147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.852532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.852562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.852990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.853020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.853533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.853563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.853931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.853961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 00:26:56.454 [2024-07-24 20:02:47.854406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.454 [2024-07-24 20:02:47.854437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.454 qpair failed and we were unable to recover it. 
00:26:56.460 [2024-07-24 20:02:47.945150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.945180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.945611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.945641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.946171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.946201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.946721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.946751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.947231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.947244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.947641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.947671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.948155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.948186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.948627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.948657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.949137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.949152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.949612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.949642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 
00:26:56.460 [2024-07-24 20:02:47.950098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.950129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.950620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.950649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.951158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.951189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.951645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.951675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.952161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.952215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.952614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.952643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.952922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.952957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.953466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.953498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.954008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.460 [2024-07-24 20:02:47.954047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.460 qpair failed and we were unable to recover it. 00:26:56.460 [2024-07-24 20:02:47.954343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.954357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 
00:26:56.461 [2024-07-24 20:02:47.954759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.954773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.955196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.955227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.955663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.955693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.956179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.956210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.956742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.956772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.957095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.957127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.957604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.957634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.958115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.958146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.958567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.958596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.959104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.959134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 
00:26:56.461 [2024-07-24 20:02:47.959575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.959605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.960074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.960106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.960628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.960658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.961109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.961140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.961579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.961608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.962116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.962148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.962615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.962645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.963060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.963091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.963596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.963610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.964008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.964038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 
00:26:56.461 [2024-07-24 20:02:47.964556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.964586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.965122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.965153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.965642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.965673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.966179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.966215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.966730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.966760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.967296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.967327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.967827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.967858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.968379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.968409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.968959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.968989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.969560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.969591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 
00:26:56.461 [2024-07-24 20:02:47.970011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.461 [2024-07-24 20:02:47.970040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.461 qpair failed and we were unable to recover it. 00:26:56.461 [2024-07-24 20:02:47.970511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.970541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.971035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.971075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.971618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.971648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.972170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.972201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.972756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.972785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.973295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.973326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.973786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.973816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.974320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.974334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.974768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.974782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 
00:26:56.462 [2024-07-24 20:02:47.975255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.975270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.975738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.975752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.976267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.976298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.976724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.976754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.977168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.977182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.977657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.977670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.978171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.978185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.978588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.978602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.979067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.979081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.979550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.979564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 
00:26:56.462 [2024-07-24 20:02:47.979981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.980015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.980540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.980571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.981078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.981093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.981497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.981511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.981961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.981975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.982456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.982487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.982970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.983000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.983516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.983546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.983982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.984013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.984389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.984404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 
00:26:56.462 [2024-07-24 20:02:47.984862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.984892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.985393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.985407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.985833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.985847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.986259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.986273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.986679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.986692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.987195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.462 [2024-07-24 20:02:47.987227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.462 qpair failed and we were unable to recover it. 00:26:56.462 [2024-07-24 20:02:47.987714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.987743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.988229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.988243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.988716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.988730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.989226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.989242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 
00:26:56.463 [2024-07-24 20:02:47.989693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.989708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.990126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.990141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.990622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.990636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.991106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.991120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.991605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.991618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.992139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.992154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.992778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.992808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.993309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.993324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.993849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.993864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.994337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.994351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 
00:26:56.463 [2024-07-24 20:02:47.994762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.994776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.995250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.995265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.995717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.995731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.996081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.996095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.996503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.996517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.996924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.996937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.997405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.997419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.997933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.997947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.998416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.998430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 00:26:56.463 [2024-07-24 20:02:47.998841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.463 [2024-07-24 20:02:47.998855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:56.463 qpair failed and we were unable to recover it. 
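Note on the errno above: on Linux, errno 111 is ECONNREFUSED — each TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP well-known port) was actively refused, i.e. nothing was accepting on the target at that moment. The minimal standalone sketch below is illustrative only (it is not SPDK code); it reproduces the same errno by connecting to a local port with no listener:

    /* Illustrative only: connect() to a port with no listener yields
     * errno 111 (ECONNREFUSED), mirroring the posix_sock_create errors
     * above. Port 4420 is just the NVMe/TCP default seen in this log;
     * any closed port behaves the same. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));

        close(fd);
        return 0;
    }

With nothing bound on the port this prints "connect() failed, errno = 111 (Connection refused)", matching the messages in this log.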
00:26:56.463 [2024-07-24 20:02:47.999266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.463 [2024-07-24 20:02:47.999280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:56.463 qpair failed and we were unable to recover it.
[... 32 outstanding commands (22 reads, 10 writes) then complete, each reported as "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:26:56.464 [2024-07-24 20:02:47.999490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:56.464 [2024-07-24 20:02:47.999657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e62ff0 is same with the state(5) to be set
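Note on the completion burst above: sct=0 is NVMe Status Code Type 0 (Generic Command Status), and within that type sc=0x8 is "Command Aborted due to SQ Deletion" — the expected status when a qpair is torn down with commands still in flight — after which the poller surfaces CQ transport error -6 (-ENXIO). A hedged sketch of that decode follows, with the spec values written out locally rather than taken from SPDK headers:

    /* Illustrative decode of the (sct, sc) pair printed above, using the
     * NVMe-spec values; constants are defined here for clarity and are
     * not SPDK identifiers. */
    #include <stdio.h>

    #define NVME_SCT_GENERIC            0x0  /* Generic Command Status type */
    #define NVME_SC_ABORTED_SQ_DELETION 0x8  /* Command Aborted due to SQ Deletion */

    static void decode_status(int sct, int sc)
    {
        if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
            printf("sct=%d, sc=%d: command aborted due to SQ deletion\n", sct, sc);
        else
            printf("sct=%d, sc=%d: other status\n", sct, sc);
    }

    int main(void)
    {
        decode_status(0, 8);  /* the pair seen 32 times in the log */
        return 0;
    }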
00:26:56.464 [2024-07-24 20:02:48.000158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.464 [2024-07-24 20:02:48.000194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:56.464 qpair failed and we were unable to recover it.
[... the same connect()/qpair-recovery failure repeats 5 more times for tqpair=0x7fb2e8000b90, through 20:02:48.002471 ...]
00:26:56.464 [2024-07-24 20:02:48.002997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.464 [2024-07-24 20:02:48.003014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:56.464 qpair failed and we were unable to recover it.
[... the same failure repeats ~53 more times for tqpair=0x7fb2e0000b90, through 20:02:48.026924; only the timestamps differ ...]
00:26:56.465 [2024-07-24 20:02:48.027420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.465 [2024-07-24 20:02:48.027430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-24 20:02:48.027877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.465 [2024-07-24 20:02:48.027887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.028344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.028376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.028921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.028931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.029426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.029437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.029838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.029848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.030314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.030346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.030861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.030871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.031263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.031273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.031677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.031687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 
00:26:56.466 [2024-07-24 20:02:48.032037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.032077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.032462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.032493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.033002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.033012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.033421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.033433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.033895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.033905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.034451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.034483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.034879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.034909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.035375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.035407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.035808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.035839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-24 20:02:48.036273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.036305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 
00:26:56.466 [2024-07-24 20:02:48.036742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.466 [2024-07-24 20:02:48.036772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.037334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.037366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.037821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.037853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.038344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.038355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.038753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.038764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.039170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.039181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.039584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.039621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.040115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.040146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.040591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.040621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.041082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.041113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 
00:26:56.736 [2024-07-24 20:02:48.041600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.041630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.042143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.042175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.042610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.042642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.043190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.043221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.043713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.043744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.044196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.044208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.044674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.044704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.045160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.045192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.045626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.045637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.046137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.046169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 
00:26:56.736 [2024-07-24 20:02:48.046707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.046737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.047124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.047156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.047599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.047628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.048083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.048114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.736 qpair failed and we were unable to recover it. 00:26:56.736 [2024-07-24 20:02:48.048629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.736 [2024-07-24 20:02:48.048660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.049196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.049227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.049740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.049751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.050243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.050253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.050659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.050689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.051153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.051185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 
00:26:56.737 [2024-07-24 20:02:48.051559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.051590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.052132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.052163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.052723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.052753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.053227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.053258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.053769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.053799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.054498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.054529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.054991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.055021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.055528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.055559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.055959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.055989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.056540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.056572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 
00:26:56.737 [2024-07-24 20:02:48.057082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.057115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.057627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.057657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.058165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.058197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.058729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.058759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.059218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.059228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.059567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.059607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.060135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.060172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.060710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.060739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.061224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.061256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.061761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.061771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 
00:26:56.737 [2024-07-24 20:02:48.062260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.062291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.062735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.062766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.063231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.063262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.063754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.063784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.064265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.064276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.064614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.064643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.065086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.065117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.065619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.065650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.066166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.066197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.737 [2024-07-24 20:02:48.066658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.066688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 
00:26:56.737 [2024-07-24 20:02:48.067123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.737 [2024-07-24 20:02:48.067155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.737 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.067642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.067672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.068164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.068195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.068590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.068621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.069174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.069205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.069593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.069623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.070080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.070111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.070643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.070674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.071158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.071189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.071623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.071653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 
00:26:56.738 [2024-07-24 20:02:48.072106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.072137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.072577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.072608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.073078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.073110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.073773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.073803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.074315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.074346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.074835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.074865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.075391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.075422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.075805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.075815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.076277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.076693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.076723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 
00:26:56.738 [2024-07-24 20:02:48.077198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.077229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.077622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.077652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.078087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.078118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.078613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.078643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.079075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.079107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.079613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.079643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.080144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.080180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.080687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.080717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.081405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.081436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.081877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.081907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 
00:26:56.738 [2024-07-24 20:02:48.082340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.082371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.082838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.082869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.083314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.083345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.083727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.083758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.084263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.084294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.084781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.084811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.085327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.738 [2024-07-24 20:02:48.085372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.738 qpair failed and we were unable to recover it. 00:26:56.738 [2024-07-24 20:02:48.085711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.085721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.086206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.086237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.086874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.086904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 
00:26:56.739 [2024-07-24 20:02:48.087353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.087384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.087817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.087827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.088219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.088229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.088620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.088650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.089168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.089200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.089635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.089666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.090098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.090129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.090817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.090847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.091361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.091393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.091896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.091926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 
00:26:56.739 [2024-07-24 20:02:48.092413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.092444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.092828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.092858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.093291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.093322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.093837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.093868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.094258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.094290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.094747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.094776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.095312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.095343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.095802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.095833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.096363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.096373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 00:26:56.739 [2024-07-24 20:02:48.096814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.739 [2024-07-24 20:02:48.096844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.739 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every further reconnect attempt from 20:02:48.097 through 20:02:48.196 ...]
00:26:56.745 [2024-07-24 20:02:48.196835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.196865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.197330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.197362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.197863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.197893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.198336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.198346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.198770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.198800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.199262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.199294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.199690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.199725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.200206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.200238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.200675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.200704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.201159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.201191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 
00:26:56.745 [2024-07-24 20:02:48.201627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.201657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.202123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.202155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.202702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.202732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.203417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.203447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.203994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.204023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.204442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.204474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.204994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.205024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.205501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.205532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.745 qpair failed and we were unable to recover it. 00:26:56.745 [2024-07-24 20:02:48.206034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.745 [2024-07-24 20:02:48.206075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.206543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.206574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 
00:26:56.746 [2024-07-24 20:02:48.207130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.207163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.207723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.207753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.208179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.208210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.208603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.208635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.209017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.209054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.209498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.209529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.210035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.210074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.210619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.210650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.211194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.211243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.211784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.211813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 
00:26:56.746 [2024-07-24 20:02:48.212382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.212414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.212851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.212882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.213325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.213357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.213920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.213950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.214435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.214445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.214844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.214874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.215415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.215447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.216014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.216053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.216497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.216527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.217018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.217059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 
00:26:56.746 [2024-07-24 20:02:48.217620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.217649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.218181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.218213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.218691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.218721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.219174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.219205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.219696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.219726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.220196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.220744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.220780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.221317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.221348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.221788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.221818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.222268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.222300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 
00:26:56.746 [2024-07-24 20:02:48.222699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.222729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.223259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.223269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.223772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.223803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.746 [2024-07-24 20:02:48.224332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.746 [2024-07-24 20:02:48.224364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.746 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.224803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.224833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.225259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.225269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.225721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.225752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.226272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.226304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.226783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.226812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.227356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.227366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 
00:26:56.747 [2024-07-24 20:02:48.227839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.227849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.228327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.228359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.228803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.228833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.229508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.229538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.229993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.230024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.230657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.230687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.231188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.231198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.231542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.231553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.232048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.232060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.232472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.232503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 
00:26:56.747 [2024-07-24 20:02:48.233015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.233025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.233474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.233485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.233848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.233878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.234431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.234463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.234983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.235014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.235506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.235536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.236027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.236038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.236459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.236471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.236826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.236836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.237335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.237346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 
00:26:56.747 [2024-07-24 20:02:48.237773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.237783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.238182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.238193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.238544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.238555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.239059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.239070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.239514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.239525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.240009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.240021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.240529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.240542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.241103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.241114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.241566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.241577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 00:26:56.747 [2024-07-24 20:02:48.242047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.747 [2024-07-24 20:02:48.242060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.747 qpair failed and we were unable to recover it. 
00:26:56.748 [2024-07-24 20:02:48.242662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.242673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.243188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.243220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.243696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.243726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.244241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.244253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.244660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.244671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.245144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.245156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.245561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.245571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.245944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.245954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.246435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.246448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.246847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.246857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 
00:26:56.748 [2024-07-24 20:02:48.247275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.247286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.247760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.247770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.248271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.248282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.248707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.248718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.249182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.249193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.249633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.249644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.250311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.250322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.250733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.250744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.251197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.251228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.251742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.251772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 
00:26:56.748 [2024-07-24 20:02:48.252278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.252289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.252715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.252725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.253129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.253140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.253486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.253497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.253979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.254009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.254521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.254552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.255017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.255028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.255511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.255522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.255868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.255879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.256260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.256271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 
00:26:56.748 [2024-07-24 20:02:48.256735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.256745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.257428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.257461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.257991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.258022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.258603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.258634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.259149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.259160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.259565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.748 [2024-07-24 20:02:48.259576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.748 qpair failed and we were unable to recover it. 00:26:56.748 [2024-07-24 20:02:48.260107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.260118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.260542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.260553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.261061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.261071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.261508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.261539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 
00:26:56.749 [2024-07-24 20:02:48.262057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.262068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.262558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.262568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.263062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.263073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.263557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.263569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.264072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.264083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.264495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.264506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.264963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.264973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.265392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.265404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.265859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.265871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 00:26:56.749 [2024-07-24 20:02:48.266397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.749 [2024-07-24 20:02:48.266408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:56.749 qpair failed and we were unable to recover it. 
00:26:57.026 [2024-07-24 20:02:48.365189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.026 [2024-07-24 20:02:48.365220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:57.026 qpair failed and we were unable to recover it.
00:26:57.026 [2024-07-24 20:02:48.365712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.365722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.366175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.366206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.366674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.366705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.367310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.367342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.367738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.367767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.368264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.368307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.026 [2024-07-24 20:02:48.368756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.026 [2024-07-24 20:02:48.368787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.026 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.369220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.369230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.369654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.369685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.370143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.370154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 
00:26:57.027 [2024-07-24 20:02:48.370495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.370505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.370966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.370996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.371424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.371455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.371856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.371892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.372432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.372443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.372977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.373007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.373547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.373578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.374133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.374144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.374510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.374540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.374997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.375028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 
00:26:57.027 [2024-07-24 20:02:48.375472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.375502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.375939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.375969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.376458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.376490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.376939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.376982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.377396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.377427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.377816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.377846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.378360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.378392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.378867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.378898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.379392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.379423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.379875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.379906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 
00:26:57.027 [2024-07-24 20:02:48.380361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.380393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.380836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.380866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.381376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.381407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.381798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.381829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.382309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.382341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.382733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.382763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.383279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.383290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.383689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.383699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.384108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.384142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.384684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.384714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 
00:26:57.027 [2024-07-24 20:02:48.385283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.385314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.385808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.027 [2024-07-24 20:02:48.385837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.027 qpair failed and we were unable to recover it. 00:26:57.027 [2024-07-24 20:02:48.386247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.386280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.386725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.386756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.387311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.387342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.387794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.387825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.388277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.388309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.388704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.388733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.389170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.389201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.389636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.389666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 
00:26:57.028 [2024-07-24 20:02:48.390160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.390192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.390585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.390616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.391124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.391155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.391620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.391656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.392147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.392178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.392626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.392656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.393156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.393187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.393628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.393659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.394093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.394124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.394633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.394663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 
00:26:57.028 [2024-07-24 20:02:48.395231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.395241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.395710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.395720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.396142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.396173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.396587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.396618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.397164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.397195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.397667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.397698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.398089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.398120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.398568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.398599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.399133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.399165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.399725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.399755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 
00:26:57.028 [2024-07-24 20:02:48.400318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.400350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.400766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.400796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.401196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.401227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.401624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.401655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.402215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.402701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.402731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.403180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.403212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.403629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.403660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.028 qpair failed and we were unable to recover it. 00:26:57.028 [2024-07-24 20:02:48.404127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.028 [2024-07-24 20:02:48.404158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.404624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.404655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 
00:26:57.029 [2024-07-24 20:02:48.405177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.405209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.405678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.405709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.406297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.406329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.406725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.406756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.407309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.407341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.407802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.407812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.408286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.408317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.408835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.408866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.409320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.409352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.409829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.409840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 
00:26:57.029 [2024-07-24 20:02:48.410333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.410364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.410815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.410846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.411341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.411374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.411761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.411797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.412288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.412320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.412762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.412792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.413182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.413214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.413711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.413721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.414162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.414193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.414688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.414719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 
00:26:57.029 [2024-07-24 20:02:48.415192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.415224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.415665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.415695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.416178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.416209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.416653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.416684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.417155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.417187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.417681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.417711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.418187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.418218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.029 [2024-07-24 20:02:48.418690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.029 [2024-07-24 20:02:48.418720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.029 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.419392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.419422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.419997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.420028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 
00:26:57.030 [2024-07-24 20:02:48.420556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.420587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.421136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.421168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.421612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.421642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.422107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.422138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.422655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.422686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.423203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.423235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.423696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.423726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.424259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.424269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.424627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.424638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.425158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.425190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 
00:26:57.030 [2024-07-24 20:02:48.425646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.425677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.426194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.426226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.426675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.426705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.427264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.427295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.427790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.427821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.428271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.428303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.428774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.429346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.429377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.429779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.429809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 00:26:57.030 [2024-07-24 20:02:48.430226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.030 [2024-07-24 20:02:48.430259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.030 qpair failed and we were unable to recover it. 
00:26:57.030 [2024-07-24 20:02:48.430653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.030 [2024-07-24 20:02:48.430684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:57.030 qpair failed and we were unable to recover it.
00:26:57.030-00:26:57.036 [... the same three-line error sequence repeats ~208 more times, timestamps 2024-07-24 20:02:48.431158 through 20:02:48.537675, all against tqpair=0x7fb2e0000b90, addr=10.0.0.2, port=4420 ...]
00:26:57.036 [2024-07-24 20:02:48.538201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.036 [2024-07-24 20:02:48.538234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:57.036 qpair failed and we were unable to recover it.
00:26:57.036 [2024-07-24 20:02:48.538635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.538666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.539212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.539244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.539757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.539787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.540305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.540347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.540758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.540789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.541284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.541315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.541848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.541879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.542425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.542457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.543033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.543073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.543600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.543630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 
00:26:57.036 [2024-07-24 20:02:48.544090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.544121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.544630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.544660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.545222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.545254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.545778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.545808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.546348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.546379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.546895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.546926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.036 qpair failed and we were unable to recover it. 00:26:57.036 [2024-07-24 20:02:48.547446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.036 [2024-07-24 20:02:48.547477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.547935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.547966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.548489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.548520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.549071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.549103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 
00:26:57.037 [2024-07-24 20:02:48.549639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.549675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.550121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.550153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.550697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.550728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.551217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.551248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.551791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.551822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.552310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.552342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.552797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.552827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.553341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.553373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.553914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.553945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.554482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.554515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 
00:26:57.037 [2024-07-24 20:02:48.554983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.554993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.555490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.555521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.556064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.556095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.556611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.556642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.557170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.557201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.557748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.557778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.558271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.558303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.558827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.558857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.559405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.559437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.559979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.560009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 
00:26:57.037 [2024-07-24 20:02:48.560541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.560573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.561098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.561130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.561647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.561677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.562208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.562240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.562748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.562779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.563332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.563364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.563881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.563912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.564438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.564470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.565006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.565036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.565610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.565641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 
00:26:57.037 [2024-07-24 20:02:48.566227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.566260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.566814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.566845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.037 [2024-07-24 20:02:48.567307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.037 [2024-07-24 20:02:48.567339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.037 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.567859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.567890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.568407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.568439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.568952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.568983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.569500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.569533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.570065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.570096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.570637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.570668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.571187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.571219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 
00:26:57.038 [2024-07-24 20:02:48.571771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.571807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.572357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.572389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.572939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.572970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.573496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.573528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.574075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.574108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.574628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.574660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.575097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.575130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.575530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.575562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.576032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.576072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.576673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.576704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 
00:26:57.038 [2024-07-24 20:02:48.577225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.577256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.577810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.577841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.578361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.578393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.578915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.578946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.579426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.579458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.579980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.580010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.580561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.580592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.581108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.581139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.581642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.581673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.582228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.582260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 
00:26:57.038 [2024-07-24 20:02:48.582805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.582836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.583414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.583446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.583885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.583916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.584416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.038 [2024-07-24 20:02:48.584448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.038 qpair failed and we were unable to recover it. 00:26:57.038 [2024-07-24 20:02:48.585009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.585039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.585511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.585542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.586017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.586056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.586625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.586657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.587226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.587237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.587746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.587777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 
00:26:57.039 [2024-07-24 20:02:48.588326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.588358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.588892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.588923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.589439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.589483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.590008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.590039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.590595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.590626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.591103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.591135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.591663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.591693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.592244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.592276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.592839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.592869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.593394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.593426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 
00:26:57.039 [2024-07-24 20:02:48.593914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.593950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.594476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.594507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.595077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.595111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.595633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.595665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.596167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.596199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.596641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.596672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.597193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.597224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.597751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.597782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.598232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.598263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.598777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.598807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 
00:26:57.039 [2024-07-24 20:02:48.599303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.599335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.599721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.599752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.600201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.600232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.600779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.600811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.601349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.601381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.601863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.601894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.602424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.602456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.602982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.603012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.603600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.039 [2024-07-24 20:02:48.603633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.039 qpair failed and we were unable to recover it. 00:26:57.039 [2024-07-24 20:02:48.604105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.604138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 
00:26:57.040 [2024-07-24 20:02:48.604644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.604676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.605181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.605212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.605717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.605748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.606311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.606343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.606822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.606853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.607491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.607525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.608015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.608026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.608464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.608497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.608952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.608983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 00:26:57.040 [2024-07-24 20:02:48.609511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.040 [2024-07-24 20:02:48.609523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.040 qpair failed and we were unable to recover it. 
00:26:57.040 [2024-07-24 20:02:48.609993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.040 [2024-07-24 20:02:48.610004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:57.040 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every retry against 10.0.0.2:4420 from 20:02:48.610 through 20:02:48.723 ...]
00:26:57.314 [2024-07-24 20:02:48.723573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.314 [2024-07-24 20:02:48.723605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:57.314 qpair failed and we were unable to recover it.
00:26:57.314 [2024-07-24 20:02:48.724146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.724160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.724638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.724648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.725070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.725081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.725584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.725595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.726089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.726101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.726493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.726524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.727051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.727062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.727487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.727498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.727968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.727979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.728489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.728500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 
00:26:57.315 [2024-07-24 20:02:48.728914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.728925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.729410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.729422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.729918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.729929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.730426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.730438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.730930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.730941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.731421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.731432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.731936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.731947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.732354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.732366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.732753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.732764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.733249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.733261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 
00:26:57.315 [2024-07-24 20:02:48.733766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.733777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.734208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.734219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.734572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.734583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.735126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.735137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.735656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.735668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.736202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.736214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.736693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.736707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.737218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.737230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.737746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.737758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.738235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.738246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 
00:26:57.315 [2024-07-24 20:02:48.738608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.738621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.739099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.739111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.739571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.739582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.740011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.740022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.740585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.740597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-24 20:02:48.741116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-24 20:02:48.741127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.741539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.741549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.741980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.741992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.742413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.742425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.742830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.742874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 
00:26:57.316 [2024-07-24 20:02:48.743346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.743383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.743865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.743876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.744581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.744593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.745077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.745088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.745579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.745591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.746090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.746103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.746506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.746517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.746984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.746995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.747410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.747421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.747792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.747803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 
00:26:57.316 [2024-07-24 20:02:48.748207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.748219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.748574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.748585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.749056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.749089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.749581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.749613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.750137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.750169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.750638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.750649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.751073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.751107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.751636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.751666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.752224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.752235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.752665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.752676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 
00:26:57.316 [2024-07-24 20:02:48.753153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.753165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.753598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.753629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.754081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.754112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.754596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.754607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.755300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.755334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.755902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.755933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.756478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.756509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.757094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.757168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.757642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.757658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.758115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.758133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 
00:26:57.316 [2024-07-24 20:02:48.758612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.758627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.316 [2024-07-24 20:02:48.758987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.316 [2024-07-24 20:02:48.759002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.316 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.759497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.759512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.759991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.760005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.760478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.760494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.760917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.760931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.761366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.761382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.761795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.761810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.762230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.762246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.762736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.762751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 
00:26:57.317 [2024-07-24 20:02:48.763243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.763260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.763762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.763778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.764221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.764236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.764564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.764578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.765050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.765065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.765517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.765532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.765967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.765982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.766389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.766423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.766895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.766926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.767440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.767455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 
00:26:57.317 [2024-07-24 20:02:48.767976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.767990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.768456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.768472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.768936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.768951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.769371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.769389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.769908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.769947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.770400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.770432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.770965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.770997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.771490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.771522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.772026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.772067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.772595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.772627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 
00:26:57.317 [2024-07-24 20:02:48.773076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.773108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.773640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.773670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.774178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.774209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.774665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.774696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.775159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.775174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.775596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.775626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.776084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.776115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.776567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.776598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.317 [2024-07-24 20:02:48.777108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.317 [2024-07-24 20:02:48.777141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.317 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.777618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.777649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 
00:26:57.318 [2024-07-24 20:02:48.778126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.778158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.778603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.778634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.779160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.779191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.779530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.779560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.780013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.780067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.780493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.780524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.780963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.780993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.781447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.781478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.781995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.782025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.782499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.782529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 
00:26:57.318 [2024-07-24 20:02:48.783029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.783048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.783540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.783575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.784013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.784027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.784538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.784570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.784973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.785003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.785464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.785495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.785994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.786025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.786480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.786511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.787030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.787071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 00:26:57.318 [2024-07-24 20:02:48.787531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.318 [2024-07-24 20:02:48.787561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.318 qpair failed and we were unable to recover it. 
00:26:57.318 [2024-07-24 20:02:48.788106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.318 [2024-07-24 20:02:48.788138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.318 qpair failed and we were unable to recover it.
00:26:57.318 [... the same three-message error sequence repeats for ~210 consecutive connection attempts to tqpair=0x1e54f30 (addr=10.0.0.2, port=4420), attempt timestamps 2024-07-24 20:02:48.788 through 20:02:48.901 ...]
00:26:57.594 [2024-07-24 20:02:48.901075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.594 [2024-07-24 20:02:48.901107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.594 qpair failed and we were unable to recover it.
00:26:57.594 [2024-07-24 20:02:48.901649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.901678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.902195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.902226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.902678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.902708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.903212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.903244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.903773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.903803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.904256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.904289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.904812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.904842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.905387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.905431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.905914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.905929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.906350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.906380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 
00:26:57.594 [2024-07-24 20:02:48.906838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.906868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.907363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.907395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-24 20:02:48.907850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-24 20:02:48.907882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.908411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.908450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.908998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.909028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.909642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.909675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.910194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.910226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.910730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.910761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.911315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.911346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.911909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.911939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-24 20:02:48.912522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.912554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.913017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.913056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.913500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.913530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.914057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.914088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.914637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.914668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.915237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.915269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.915804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.915835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.916406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.916438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.916943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.916974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.917485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.917517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-24 20:02:48.917994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.918024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.918582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.918614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.919184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.919216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.919765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.919795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.920242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.920273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.920789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.920820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.921364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.921396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.921848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.921878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.922347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.922377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.922898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.922929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-24 20:02:48.923330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.923345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.923791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.923821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.924263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.924295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.924817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.924847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.925395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.925426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.925958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.925989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.926385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.926417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.926960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.926975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-24 20:02:48.927498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-24 20:02:48.927530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.928074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.928106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-24 20:02:48.928681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.928712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.929252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.929284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.929802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.929832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.930367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.930400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.930974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.931004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.931553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.931584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.932109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.932141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.932707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.932737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.933307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.933338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.933859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.933889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-24 20:02:48.934332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.934363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.934873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.934903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.935452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.935483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.935982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.936012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.936553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.936585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.937116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.937149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.937722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.937757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.938307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.938339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.938888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.938919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.939318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.939349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-24 20:02:48.939781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.939811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.940309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.940340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.940888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.940918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.941428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.941460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.942014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.942053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.942539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.942569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.943118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.943149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.943730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.943761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.944332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.944364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.944909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.944940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-24 20:02:48.945486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.945518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.946030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.946069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.946594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.946624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.947145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.947176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-24 20:02:48.947695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-24 20:02:48.947727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.948175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.948206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.948757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.948787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.949356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.949387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.949936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.949965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.950429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.950460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-24 20:02:48.950909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.950940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.951484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.951515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.952104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.952136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.952610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.952646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.953148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.953179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.953721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.953751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.954257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.954289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.954816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.954848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.955320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.955357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.955717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.955731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-24 20:02:48.956104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.956118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.956589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.956619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.957161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.957193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.957731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.957761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.958264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.958295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.958797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.958811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.959239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.959270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.959672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.959703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.960153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.960185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.960707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.960737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-24 20:02:48.961272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.961305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.961850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.961880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.962328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.962360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.962849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.962879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.963445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.963476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.963972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.964002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.964532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.964564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.965100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.965132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.965691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.965722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.966248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.966262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-24 20:02:48.966701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.966718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-24 20:02:48.967273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-24 20:02:48.967309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.967827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.967857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.968346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.968377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.968879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.968909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.969437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.969469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.969990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.970021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.970580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.970612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.971075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.971107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-24 20:02:48.971633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-24 20:02:48.971664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 
00:26:57.598 [2024-07-24 20:02:48.972210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-24 20:02:48.972242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeated for ~210 consecutive attempts against tqpair=0x1e54f30 (addr=10.0.0.2, port=4420) between 20:02:48.972 and 20:02:49.078, every attempt failing with errno = 111; last occurrence below ...]
00:26:57.604 [2024-07-24 20:02:49.078025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-24 20:02:49.078066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-24 20:02:49.078493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.078523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.078962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.078998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.079428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.079443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.079929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.079959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.080460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.080491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.080821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.080851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.081293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.081325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.081750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.081781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.082040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.082061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.082545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.082575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 
00:26:57.604 [2024-07-24 20:02:49.083004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.083035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.083490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.083522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.084009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.084039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.084562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.084592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.085082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.085114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.085632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.085662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.086128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.086143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.086624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.086655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.087023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.087062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.087577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.087607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 
00:26:57.604 [2024-07-24 20:02:49.088121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.088152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.088710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.088741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.089257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.089289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.089814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.089827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.090286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.090300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.090786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.604 [2024-07-24 20:02:49.090817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.604 qpair failed and we were unable to recover it. 00:26:57.604 [2024-07-24 20:02:49.091315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.091346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.091785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.091815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.092324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.092338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.092746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.092776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 
00:26:57.605 [2024-07-24 20:02:49.093212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.093244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.093706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.093736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.094156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.094171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.094636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.094666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.095179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.095211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.095727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.095757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.096270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.096311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.096824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.096854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.097305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.097337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.097868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.097898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 
00:26:57.605 [2024-07-24 20:02:49.098427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.098459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.098895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.098926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.099418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.099449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.099964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.099995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.100514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.100544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.100976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.101015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.101435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.101450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.101858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.101888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.102325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.102356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.102870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.102900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 
00:26:57.605 [2024-07-24 20:02:49.103391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.103422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.103869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.103898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.104349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.104364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.104794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.104824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.105339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.105370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.105858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.105888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.106411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.106443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.106951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.106981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.107418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.107449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.107958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.107988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 
00:26:57.605 [2024-07-24 20:02:49.108510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.108541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.108982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.109013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.605 [2024-07-24 20:02:49.109554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.605 [2024-07-24 20:02:49.109569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.605 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.110049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.110064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.110286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.110300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.110795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.110811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.111271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.111303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.111741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.111771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.112259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.112293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.112810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.112841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 
00:26:57.606 [2024-07-24 20:02:49.113222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.113254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.113741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.113771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.114281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.114312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.114752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.114783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.115223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.115255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.115679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.115708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.116134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.116165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.116675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.116706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.117195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.117227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.117720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.117751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 
00:26:57.606 [2024-07-24 20:02:49.118201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.118232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.118671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.118701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.119178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.119193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.119649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.119679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.120187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.120202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.120606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.120636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.120881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.120911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.121341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.121373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.121796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.121827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.122357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.122389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 
00:26:57.606 [2024-07-24 20:02:49.122900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.122931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.123370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.123401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.123880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.123919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.124407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.124438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.124898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.124928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.125364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.125395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.125901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.125931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.126446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.126477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.126848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.126878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 00:26:57.606 [2024-07-24 20:02:49.127391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.606 [2024-07-24 20:02:49.127422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.606 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-24 20:02:49.127848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.127878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.128129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.128160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.128618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.128648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.129137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.129168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.129662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.129692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.130142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.130156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.130622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.130636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.130962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.130976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.131458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.131490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.131939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.131969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-24 20:02:49.132338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.132369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.132749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.132780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.133291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.133322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.133692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.133722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.134088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.134103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.134582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.134612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.135064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.135096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.135588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.135618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.136122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.136153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.136601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.136631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-24 20:02:49.137081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.137113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.137620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.137650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.138079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.138111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.138621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.138651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.139161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.139192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.139626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.139656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.140062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.140093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.140472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.140502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.140936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.140966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.141393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.141425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-24 20:02:49.141930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.141960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.142376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.142408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.142920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.142950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.143470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.143502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.143880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.143911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.144323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.144337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-24 20:02:49.144793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-24 20:02:49.144823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.145197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.145228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.145673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.145703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.146207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.146221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-24 20:02:49.146628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.146657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.147094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.147126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.147610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.147640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.148107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.148137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.148588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.148618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.149107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.149139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.149650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.149680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.150169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.150206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.150561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.150592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.151098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.151128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-24 20:02:49.151560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.151591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.151976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.152006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.152491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.152505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.152981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.153011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.153442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.153473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.153850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.153881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.154408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.154439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.154894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.154925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.155437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.155468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.155993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.156024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-24 20:02:49.156556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.156592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.157009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.157039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.157496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.157526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.157954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.158422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.158453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.158884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.158914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-24 20:02:49.159435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-24 20:02:49.159449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.159850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.159880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.160367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.160397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.160832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.160862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-24 20:02:49.161371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.161402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.161647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.161677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.162160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.162190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.162697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.162728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.163166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.163198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.163684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.163714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.164197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.164228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.164714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.164745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.165231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.165263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.165759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.165789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-24 20:02:49.166238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.166269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.166727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.166757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.167262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.167294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.167806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.167835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.168343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.168374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.168706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.168736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.169222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.169254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.169735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.169770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.170255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.170286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.170794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.170824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-24 20:02:49.171207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.171238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.171696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.171727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.172098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.172129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.172546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.172560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.173005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.173019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.173507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.173538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.173958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.173995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.174416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.174430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.174826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-24 20:02:49.174840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-24 20:02:49.175261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.175275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 
00:26:57.610 [2024-07-24 20:02:49.175680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.175722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.176158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.176190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.176612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.176626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.177092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.177106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.177585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.177615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.178064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.178095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.178613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.178643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.179122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.179137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.179605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.179619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.180072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.180103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 
00:26:57.610 [2024-07-24 20:02:49.180349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.180379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.180834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.180864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.181369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.181400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-24 20:02:49.181843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-24 20:02:49.181874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.880 [2024-07-24 20:02:49.182360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.880 [2024-07-24 20:02:49.182399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.880 qpair failed and we were unable to recover it. 00:26:57.880 [2024-07-24 20:02:49.182912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.880 [2024-07-24 20:02:49.182941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.880 qpair failed and we were unable to recover it. 00:26:57.880 [2024-07-24 20:02:49.183429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.880 [2024-07-24 20:02:49.183461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.880 qpair failed and we were unable to recover it. 00:26:57.880 [2024-07-24 20:02:49.183970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.880 [2024-07-24 20:02:49.184001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.184460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.184495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.184932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.184946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 
00:26:57.881 [2024-07-24 20:02:49.185380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.185412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.185943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.185973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.186478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.186509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.186704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.186734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.187237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.187269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.187697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.187727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.188143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.188174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.188684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.188714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.189179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.189193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.189539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.189569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 
00:26:57.881 [2024-07-24 20:02:49.190023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.190061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.190517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.190547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.190989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.191019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.191466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.191497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.191936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.191966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.192345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.192376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.192796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.192826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.193337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.193369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.193851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.193881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.194327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.194357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 
00:26:57.881 [2024-07-24 20:02:49.194843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.194873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.195378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.195409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.195845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.195876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.196332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.196375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.196777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.196807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.197320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.197351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.197794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.197824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.198308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.198339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.198812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.198843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.199285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.199315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 
00:26:57.881 [2024-07-24 20:02:49.199745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.199759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.200184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.200226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.200420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.200451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.881 [2024-07-24 20:02:49.200898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.881 [2024-07-24 20:02:49.200937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.881 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.201323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.201337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.201809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.201850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.202336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.202367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.202822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.202852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.203353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.203384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.203838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.203868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 
00:26:57.882 [2024-07-24 20:02:49.204322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.204354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.204847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.204877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.205329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.205369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.205822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.205852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.206372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.206402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.206905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.206935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.207422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.207453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.207901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.207931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.208364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.208395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.208887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.208917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 
00:26:57.882 [2024-07-24 20:02:49.209403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.209435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.209834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.209864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.210306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.210320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.210729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.210743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.211150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.211164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.211634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.211648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.212126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.212158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.212593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.212623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.213133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.213164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.213674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.213704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 
00:26:57.882 [2024-07-24 20:02:49.214192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.214223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.214739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.214769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.215200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.215237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.215687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.215717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.216159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.216191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.216701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.216731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.217216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.217247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.217737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.217766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.218192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.218224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 00:26:57.882 [2024-07-24 20:02:49.218543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.882 [2024-07-24 20:02:49.218556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.882 qpair failed and we were unable to recover it. 
00:26:57.882 [2024-07-24 20:02:49.218970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.218984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.219410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.219442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.219858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.219888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.220325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.220356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.220793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.220823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.221254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.221285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.221817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.221847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.222357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.222388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.222924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.222954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 00:26:57.883 [2024-07-24 20:02:49.223437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.883 [2024-07-24 20:02:49.223468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.883 qpair failed and we were unable to recover it. 
00:26:57.883 [2024-07-24 20:02:49.223892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.883 [2024-07-24 20:02:49.223936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.883 qpair failed and we were unable to recover it.
[... same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeated verbatim for every reconnect attempt from 20:02:49.224339 through 20:02:49.320388 ...]
00:26:57.889 [2024-07-24 20:02:49.320388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.889 [2024-07-24 20:02:49.320419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.889 qpair failed and we were unable to recover it.
00:26:57.889 [2024-07-24 20:02:49.320925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.320955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.321465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.321496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.321888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.321919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.322353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.322385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.322804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.322835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.323368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.323399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.323885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.323914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.324349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.324380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.324819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.324850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.325359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.325390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 
00:26:57.889 [2024-07-24 20:02:49.325838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.325868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.326379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.326410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.326846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.326877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.327315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.327346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.327785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.327798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.328277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.328308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.328678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.328712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.329144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.329175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.329608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.329638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.330075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.330107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 
00:26:57.889 [2024-07-24 20:02:49.330620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.330650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.331160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.331191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.331557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.331588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.332074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.332105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.332347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.332377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.332825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.332855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.333340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.333371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.333769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.333799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.334309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.889 [2024-07-24 20:02:49.334340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.889 qpair failed and we were unable to recover it. 00:26:57.889 [2024-07-24 20:02:49.334766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.334796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 
00:26:57.890 [2024-07-24 20:02:49.335236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.335267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.335713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.335743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.336250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.336280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.336724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.336754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.337207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.337238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.337745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.337774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.338206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.338237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.338674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.338704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.338949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.338963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.339372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.339403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 
00:26:57.890 [2024-07-24 20:02:49.339842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.339872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.340376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.340406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.340821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.340835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.341250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.341265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.341741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.341771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.342207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.342238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.342674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.342704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.343131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.343162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.343584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.343614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.343928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.343958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 
00:26:57.890 [2024-07-24 20:02:49.344565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.344596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.345034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.345073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.345462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.345493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.345998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.346012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.346489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.346520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.347030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.347071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.347504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.347534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.348013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.348051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.348572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.348602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 00:26:57.890 [2024-07-24 20:02:49.349088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.349120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.890 qpair failed and we were unable to recover it. 
00:26:57.890 [2024-07-24 20:02:49.349573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.890 [2024-07-24 20:02:49.349603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.350113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.350144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.350523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.350553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.351036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.351085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.351519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.351548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.352054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.352068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.352463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.352493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.352930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.352959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.353442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.353473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.353931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.353961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 
00:26:57.891 [2024-07-24 20:02:49.354391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.354423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.354865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.354895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.355320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.355351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.355836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.355865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.356289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.356320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.356736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.356749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.357237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.357268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.357714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.357744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.358178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.358192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.358523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.358537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 
00:26:57.891 [2024-07-24 20:02:49.358924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.358938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.359334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.359365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.359736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.359767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.360249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.360280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.360813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.360849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.361224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.361255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.361740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.361769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.362189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.362221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.362711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.362741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.363104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.363135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 
00:26:57.891 [2024-07-24 20:02:49.363562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.363591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.364072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.364086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.364425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.364438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.364910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.364923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.365164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.365178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.365583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.365597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.366062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.891 [2024-07-24 20:02:49.366094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.891 qpair failed and we were unable to recover it. 00:26:57.891 [2024-07-24 20:02:49.366527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.366557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.366924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.366956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.367392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.367423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 
00:26:57.892 [2024-07-24 20:02:49.367929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.367959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.368329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.368360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.368600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.368614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.369116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.369147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.369574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.369604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.369966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.369996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.370388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.370419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.370928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.370959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.371444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.371476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.371827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.371857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 
00:26:57.892 [2024-07-24 20:02:49.372340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.372371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.372884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.372904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.373385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.373417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.373938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.373968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.374514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.374545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.374979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.375009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.375466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.375497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.375938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.375968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.376471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.376503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.377017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.377058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 
00:26:57.892 [2024-07-24 20:02:49.377545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.377575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.378083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.378114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.378555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.378585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.379023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.379062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.379439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.379469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.379985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.380015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.380513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.380544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.380982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.381013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.381527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.381541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 00:26:57.892 [2024-07-24 20:02:49.382008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.892 [2024-07-24 20:02:49.382038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:57.892 qpair failed and we were unable to recover it. 
00:26:57.892 [2024-07-24 20:02:49.382578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.892 [2024-07-24 20:02:49.382608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:57.892 qpair failed and we were unable to recover it.
00:26:57.892 [... the same three-line group (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x1e54f30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats some 200 more times between 20:02:49.382 and 20:02:49.479; duplicate entries elided ...]
00:26:58.169 [2024-07-24 20:02:49.479351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.169 [2024-07-24 20:02:49.479366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.169 qpair failed and we were unable to recover it.
00:26:58.169 [2024-07-24 20:02:49.479714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.479727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.480059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.480073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.480465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.480479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.480915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.480929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.481328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.481342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.481790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.481803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.482210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.482225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.482650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.482664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.482998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.483012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.483153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.483167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 
00:26:58.169 [2024-07-24 20:02:49.483587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.483600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.483916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.483929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.484337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.484351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.169 [2024-07-24 20:02:49.484756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.169 [2024-07-24 20:02:49.484770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.169 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.485108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.485122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.485541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.485557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.485890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.485903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.486357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.486372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.486821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.486835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.487228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.487243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 
00:26:58.170 [2024-07-24 20:02:49.487502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.487516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.487934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.487947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.488112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.488127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.488527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.488541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.488933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.488947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.489341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.489356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.489773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.489787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.490253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.490267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.490826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.490839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.491189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.491204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 
00:26:58.170 [2024-07-24 20:02:49.491623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.491636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.491968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.491982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.492457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.492471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.492863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.492876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.493213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.493228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.493637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.493651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.494100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.494114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.494450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.494465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.494867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.494881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.495223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.495237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 
00:26:58.170 [2024-07-24 20:02:49.495652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.495666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.495813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.495826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.496238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.496252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.496664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.496677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.170 [2024-07-24 20:02:49.497151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.170 [2024-07-24 20:02:49.497165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.170 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.497595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.497609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.498084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.498098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.498507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.498521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.498925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.498938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.499339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.499353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 
00:26:58.171 [2024-07-24 20:02:49.499754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.499767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.500258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.500272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.500612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.500626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.501033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.501054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.501466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.501480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.501824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.501838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.502168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.502189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.502585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.502599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.503064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.503078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.503480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.503494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 
00:26:58.171 [2024-07-24 20:02:49.503894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.503908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.504313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.504327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.504670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.504684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.505017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.505031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.505424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.505438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.505886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.505900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.506398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.506413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.506823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.506837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.507238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.507253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.507652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.507666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 
00:26:58.171 [2024-07-24 20:02:49.508055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.508071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.508553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.508567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.508895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.508909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.509364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.509378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.509707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.509721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.510117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.510133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.510530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.510546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.511029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.511050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.511446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.511465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 00:26:58.171 [2024-07-24 20:02:49.511854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.511869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.171 qpair failed and we were unable to recover it. 
00:26:58.171 [2024-07-24 20:02:49.512276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.171 [2024-07-24 20:02:49.512291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.512741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.512755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.513104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.513118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.513509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.513526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.513881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.513895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.514238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.514252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.514648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.514662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.515069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.515084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.515412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.515426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.515825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.515839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 
00:26:58.172 [2024-07-24 20:02:49.516171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.516186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.516650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.516665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.517138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.517153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.517555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.517569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.517913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.517926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.518327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.518341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.518843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.518857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.519203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.519217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.519557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.519570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.519923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.519937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 
00:26:58.172 [2024-07-24 20:02:49.520438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.520452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.520810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.520824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.521159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.521173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.521600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.521614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.522009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.522023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.522416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.522431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.522898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.522912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.523329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.523344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.523735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.523749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.524216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.524230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 
00:26:58.172 [2024-07-24 20:02:49.524655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.524671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.525007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.525020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.525357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.525371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.525719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.525733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.526123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.172 [2024-07-24 20:02:49.526137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.172 qpair failed and we were unable to recover it. 00:26:58.172 [2024-07-24 20:02:49.526538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.526552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.526885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.526899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.527301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.527315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.527822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.527836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.528175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.528189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 
00:26:58.173 [2024-07-24 20:02:49.528589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.528603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.528991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.529005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.529423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.529437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.529818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.529831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.530263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.530278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.530679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.530693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.530924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.530937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.531326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.531340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.531751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.531765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 00:26:58.173 [2024-07-24 20:02:49.532162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.173 [2024-07-24 20:02:49.532176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.173 qpair failed and we were unable to recover it. 
00:26:58.173 [2024-07-24 20:02:49.532642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.532656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.533051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.533065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.533541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.533555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.534029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.534047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.534430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.534443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.534848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.534862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.535185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.535200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.535590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.535604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.535994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.536007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.536409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.536424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.536829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.536842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.537248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.537261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.173 [2024-07-24 20:02:49.537739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.173 [2024-07-24 20:02:49.537753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.173 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.538162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.538176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.538627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.538641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.539040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.539059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.539455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.539469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.539812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.539826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.540223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.540237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.540713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.540727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.541147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.541161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.541514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.541528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.541920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.541934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.542352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.542366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.542775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.542788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.543191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.543205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.543555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.543569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.544018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.544032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.544488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.544502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.544974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.544988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.545390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.545404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.545866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.545880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.546303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.546317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.546713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.546727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.547114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.547127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.547463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.547477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.547862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.547876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.548264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.548278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.548733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.548747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.549169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.549183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.549513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.549527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.549913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.549926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.550262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.550276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.550767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.550781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.551187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.551202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.551601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.551615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.552088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.552102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.552488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.552502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.174 [2024-07-24 20:02:49.552901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.174 [2024-07-24 20:02:49.552917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.174 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.553367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.553381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.553795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.553809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.554298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.554313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.554727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.554740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.555087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.555118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.555602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.555632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.556142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.556173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.556596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.556627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.557003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.557033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.557567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.557598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.558066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.558098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.558602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.558633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.558826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.558856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.559217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.559232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.559687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.559718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.560141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.560155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.560490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.560520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.560951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.560981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.561227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.561241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.561574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.561617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.562076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.562107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.562612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.562643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.563024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.563067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.563392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.563423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.563934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.563964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.564400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.564431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.564860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.564904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.565370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.565401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.565830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.565860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.566290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.566321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.566697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.566727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.567121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.567151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.567533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.567562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.568071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.568102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.568616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.568630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.569030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.175 [2024-07-24 20:02:49.569050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.175 qpair failed and we were unable to recover it.
00:26:58.175 [2024-07-24 20:02:49.569462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.569492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.569880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.569910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.570344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.570376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.570887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.570918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.571414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.571446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.571872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.571901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.572339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.572371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.572809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.572839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.573269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.573300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.573727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.573756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.574128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.574160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.574590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.574620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.575061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.575092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.575534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.575564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.575948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.575978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.576402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.576434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.576682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.576712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.577223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.577259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.577708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.577738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.578162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.578193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.578558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.578960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.578990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.579440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.579471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.579962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.579992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.580192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.580223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.580592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.580621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.581064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.581095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.581579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.581610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.582095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.582126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.582505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.582536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.583067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.583098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.583485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.583515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.584124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.584156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.584596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.584626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.585065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.176 [2024-07-24 20:02:49.585096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.176 qpair failed and we were unable to recover it.
00:26:58.176 [2024-07-24 20:02:49.585610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.585640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.586074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.586105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.586480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.586494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.586879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.586893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.587307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.587338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.587759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.587789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.588228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.588258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.588761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.588776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.589135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.589166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.589597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.589626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.590013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.590052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.590484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.590514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.590884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.590915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.591350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.591381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.591743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.591773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.592198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.592229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.592598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.592628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.593008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.593039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.593548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.593579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.593947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.593978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.594495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.594527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.594958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.594988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.595425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.595456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.595810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.595845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.596329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.596360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.596846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.596884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.597040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.597059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.597418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.597448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.597830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.597860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.598290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.598321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.598685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.598715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.599101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.599133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.599603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.599633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.600004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.600035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.600581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.177 [2024-07-24 20:02:49.600596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.177 qpair failed and we were unable to recover it.
00:26:58.177 [2024-07-24 20:02:49.600992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.601023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.601472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.601503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.601983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.602015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.602553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.602584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.603072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.603103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.603490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.603519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.603961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.603991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.604434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.604465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.604845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.604875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.605242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.605273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.605645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.605675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.606158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.606189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.606450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.606481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.606779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.606809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.607225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.607258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.607624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.607660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.608097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.608128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.608617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.608647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.609018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.609057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.609499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.609529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.609960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.609990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.610484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.610515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.610943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.610974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.611415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.611446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.611876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.611906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.612157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.612171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.612565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.612578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.612903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.612917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.613334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.613364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.613820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.178 [2024-07-24 20:02:49.613850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.178 qpair failed and we were unable to recover it.
00:26:58.178 [2024-07-24 20:02:49.614382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.614413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.614612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.614642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.615083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.615115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.615606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.615635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.616019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.616059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.616541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.616574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.617009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.617040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.617482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.617513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.617935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.617966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.618402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.618433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.618899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.618913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.619332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.619363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.619826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.619862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.620252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.620283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.620718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.620748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.621178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.621208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.621574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.621604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.622061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.622093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.622280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.622294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.622692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.622722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.623178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.623192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.623548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.623562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.624037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.624059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.624524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.624554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.624955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.179 [2024-07-24 20:02:49.624985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.179 qpair failed and we were unable to recover it.
00:26:58.179 [2024-07-24 20:02:49.625346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.625361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.625690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.625705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.626036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.626076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.626513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.626543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.627030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.627074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.627336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.627365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.179 [2024-07-24 20:02:49.627673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.179 [2024-07-24 20:02:49.627703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.179 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.628125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.628157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.628587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.628617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.628984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.629014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 
00:26:58.180 [2024-07-24 20:02:49.629448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.629479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.629852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.629882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.630257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.630288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.630743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.630774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.631151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.631182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.631572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.631603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.632062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.632094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.632353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.632383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.632809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.632840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.633285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.633317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 
00:26:58.180 [2024-07-24 20:02:49.633828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.633857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.634252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.634283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.634803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.634834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.635213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.635244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.635672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.635702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.636095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.636126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.636561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.636591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.637091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.637123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.637498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.637528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.637899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.637930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 
00:26:58.180 [2024-07-24 20:02:49.638420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.638452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.638814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.638844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.639225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.639256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.639682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.639711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.640194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.640226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.640804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.640834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.641275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.641307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.641687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.641718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.641899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.641929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.642331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.642362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 
00:26:58.180 [2024-07-24 20:02:49.642786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.642816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.180 [2024-07-24 20:02:49.643322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.180 [2024-07-24 20:02:49.643365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.180 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.643756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.643770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.644115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.644129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.644596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.644626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.644997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.645027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.645458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.645489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.645863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.645893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.646325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.646356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.646779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.646808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 
00:26:58.181 [2024-07-24 20:02:49.647188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.647220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.647594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.647608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.648012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.648026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.648383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.648415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.648901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.648932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.649300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.649336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.649759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.649773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.650176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.650190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.650606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.650620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.651020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.651060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 
00:26:58.181 [2024-07-24 20:02:49.651438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.651468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.652101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.652132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.652508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.652538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.652907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.652938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.653373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.653404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.653778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.653791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.654204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.654235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.654611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.654625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.656031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.656067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.656457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.656472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 
00:26:58.181 [2024-07-24 20:02:49.656934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.656965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.657372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.657403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.657856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.657895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.658330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.658362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.658760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.658774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.659113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.659127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.659467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.181 [2024-07-24 20:02:49.659481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.181 qpair failed and we were unable to recover it. 00:26:58.181 [2024-07-24 20:02:49.659824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.659838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.660201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.660232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.660587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.660617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 
00:26:58.182 [2024-07-24 20:02:49.660810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.660840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.661203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.661234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.661660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.661696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.662187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.662218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.662656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.662686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.663131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.663145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.663543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.663574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.663947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.663976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.664406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.664420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.664766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.664780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 
00:26:58.182 [2024-07-24 20:02:49.665187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.665201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.665598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.665612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.665944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.665958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.666326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.666340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.666688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.666718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.667151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.667181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.667614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.667645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.668152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.668182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.668550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.668580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.668938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.668968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 
00:26:58.182 [2024-07-24 20:02:49.669335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.669367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.669749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.669779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.670161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.670193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.670576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.670606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.671034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.671089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.671447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.671478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.671840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.671870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.672232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.672263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.672660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.672675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.673038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.673084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 
00:26:58.182 [2024-07-24 20:02:49.673571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.673601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.674030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.182 [2024-07-24 20:02:49.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.182 qpair failed and we were unable to recover it. 00:26:58.182 [2024-07-24 20:02:49.674448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.674462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.674869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.674898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.675378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.675409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.675759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.675772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.676146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.676177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.676539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.676552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.676955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.676984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.677360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.677391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 
00:26:58.183 [2024-07-24 20:02:49.677740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.677753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.678138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.678153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.678310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.678323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.678658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.678688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.679114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.679145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.679764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.679793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.680158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.680189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.680559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.680589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.681031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.681072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 00:26:58.183 [2024-07-24 20:02:49.681491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.183 [2024-07-24 20:02:49.681521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.183 qpair failed and we were unable to recover it. 
00:26:58.183 [2024-07-24 20:02:49.681904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.183 [2024-07-24 20:02:49.681917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.183 qpair failed and we were unable to recover it.
00:26:58.185 [2024-07-24 20:02:49.715038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.185 [2024-07-24 20:02:49.715058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.185 qpair failed and we were unable to recover it.
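errno 111 on Linux is ECONNREFUSED: the host 10.0.0.2 is reachable, but nothing is accepting on port 4420, so every qpair reconnect attempt above dies inside posix_sock_create(). A minimal bash probe (an illustration, not suite code) that exercises the same refuse-and-retry pattern:

# Probe sketch: each attempt fails with "connection refused" for as long
# as no listener is bound to 10.0.0.2:4420; the attempt count and delay
# are arbitrary choices for the sketch.
for attempt in 1 2 3 4 5; do
  if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "attempt $attempt: connected"
    break
  fi
  echo "attempt $attempt: connect() failed (connection refused)"
  sleep 1
done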
00:26:58.185 [2024-07-24 20:02:49.715394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.185 [2024-07-24 20:02:49.715407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.186 qpair failed and we were unable to recover it.
00:26:58.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2206064 Killed "${NVMF_APP[@]}" "$@"
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:58.186 [2024-07-24 20:02:49.718943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.186 [2024-07-24 20:02:49.718959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.186 qpair failed and we were unable to recover it.
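The "Killed" notice above is the test behaving as designed: target_disconnect.sh SIGKILLs the nvmf target it launched earlier (which is why the initiator's connects are being refused), and disconnect_init then relaunches it via nvmfappstart -m 0xF0. As a self-contained aside on the message itself, non-interactive bash prints that "line N: PID Killed ..." notice on stderr when it reaps a SIGKILLed background job, as this sketch shows:

# Sketch only: reproduce the "Killed" job notice and its exit status.
sleep 300 & pid=$!
kill -9 "$pid"    # stand-in for the harness killing nvmf_tgt
wait "$pid"       # bash prints: <script>: line N: <pid> Killed  sleep 300
echo "wait status: $? (137 = 128 + SIGKILL)"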
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2206914
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2206914
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2206914 ']'
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:58.186 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:58.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:58.187 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:58.187 20:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
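Here the harness starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and waits for it: the traced locals show waitforlisten polling for the RPC socket /var/tmp/spdk.sock with max_retries=100. A hedged sketch of that polling loop (the real helper in common/autotest_common.sh does more than this; the socket test below is a simplification):

# Conceptual re-creation of waitforlisten's loop, using the pid,
# rpc_addr, and max_retries values from the trace above.
pid=2206914 rpc_addr=/var/tmp/spdk.sock max_retries=100
for ((i = 0; i < max_retries; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; exit 1; }
  [[ -S $rpc_addr ]] && { echo "$rpc_addr is up after $i polls"; exit 0; }
  sleep 0.5
done
echo "timed out waiting for $rpc_addr"
exit 1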
00:26:58.187 [2024-07-24 20:02:49.730149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.187 [2024-07-24 20:02:49.730163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.187 qpair failed and we were unable to recover it.
00:26:58.459 [2024-07-24 20:02:49.768163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.459 [2024-07-24 20:02:49.768178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:58.459 qpair failed and we were unable to recover it.
00:26:58.459 [2024-07-24 20:02:49.768511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.768525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.768976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.768990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.769339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.769353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.769802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.769815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.770157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.770170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.770566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.770581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.770964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.459 [2024-07-24 20:02:49.770978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.459 qpair failed and we were unable to recover it. 00:26:58.459 [2024-07-24 20:02:49.771428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.771442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.771675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.771689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.772143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.772158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 
00:26:58.460 [2024-07-24 20:02:49.772557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.772571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.772974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.772988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.773330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.773344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.773839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.773853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.774082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.774096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.774434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.774448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.774588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.774601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.774900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.774913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.775362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.460 [2024-07-24 20:02:49.775377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:58.460 qpair failed and we were unable to recover it. 00:26:58.460 [2024-07-24 20:02:49.775504] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
00:26:58.460 [2024-07-24 20:02:49.775551] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect() failed / sock connection error / qpair failed triplet for tqpair=0x1e54f30 continues through 20:02:49.779 ...]
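errno = 111 is ECONNREFUSED on Linux: each connect() attempt in posix_sock_create is being actively refused because nothing is listening at 10.0.0.2:4420 at that moment, and the initiator keeps retrying. A minimal standalone sketch (illustration only, not SPDK code) that reproduces the same errno:

    /* Minimal sketch, not SPDK code: connect() to a reachable host with no
     * listener on the target port fails with errno 111 (ECONNREFUSED), the
     * same failure mode posix_sock_create reports above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }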
[... the same failures for tqpair=0x1e54f30 continue through 20:02:49.795 ...]
00:26:58.461 [2024-07-24 20:02:49.795372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.461 [2024-07-24 20:02:49.795403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.461 qpair failed and we were unable to recover it.
00:26:58.461 [2024-07-24 20:02:49.795433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e62ff0 (9): Bad file descriptor
[... the triplet resumes for tqpair=0x7fb2e8000b90 and repeats through 20:02:49.799 ...]
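The flush error on tqpair=0x1e62ff0 carries errno 9 (EBADF): by the time process_completions tries to flush, the qpair's socket descriptor has already been closed. A tiny sketch (illustration only, not the SPDK code path) of the same errno:

    /* Minimal sketch, not SPDK code: any I/O on an already-closed descriptor
     * returns errno 9 (EBADF), matching a flush attempted after the qpair's
     * socket was torn down. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(STDOUT_FILENO);  /* any valid descriptor */
        char byte = 0;

        close(fd);                    /* tear it down first */
        if (write(fd, &byte, 1) < 0)
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        return 0;
    }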
[... failures for tqpair=0x7fb2e8000b90 continue through 20:02:49.804 ...]
00:26:58.462 EAL: No free 2048 kB hugepages reported on node 1
[... failures for tqpair=0x7fb2e8000b90 continue through 20:02:49.807 ...]
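The EAL warning means DPDK found no free 2048 kB hugepages on NUMA node 1 during initialization. The counters it consults are exposed by the kernel; a small sketch (illustration only) that dumps the system-wide ones from /proc/meminfo (per-node counts live under /sys/devices/system/node/node<N>/hugepages/):

    /* Minimal sketch: print the hugepage counters from /proc/meminfo
     * (HugePages_Total, HugePages_Free, Hugepagesize, ...). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "Huge", 4) == 0)  /* hugepage-related lines only */
                fputs(line, stdout);
        fclose(f);
        return 0;
    }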
[... failures for tqpair=0x7fb2e8000b90 continue through 20:02:49.822 ...]
00:26:58.463 [2024-07-24 20:02:49.822158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.463 [2024-07-24 20:02:49.822187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.463 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet for tqpair=0x7fb2e0000b90 repeats through 20:02:49.835 ...]
00:26:58.464 [2024-07-24 20:02:49.836317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.836328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.836712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.836723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.837055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.837066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.837390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.837401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.837942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.837952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.838468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.838479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.838880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.838893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.839303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.839313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.839531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.839541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.839986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.839996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 
00:26:58.464 [2024-07-24 20:02:49.840325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.840335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.840882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.840893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.464 qpair failed and we were unable to recover it. 00:26:58.464 [2024-07-24 20:02:49.841245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.464 [2024-07-24 20:02:49.841255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.841687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.841698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.842082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.842093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.842494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.842505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.842802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.842812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.843139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.843151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.843530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.843541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.843866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.843876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 
00:26:58.465 [2024-07-24 20:02:49.844351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.844363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.844692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.844702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.845030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.845040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.845415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.845425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.845845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.845856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.846177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.846189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.846588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.846598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.847046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.847057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.847519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.847529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.847918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.847928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 
00:26:58.465 [2024-07-24 20:02:49.848320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.848332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.848732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.848743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.849131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.849141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.849546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.849556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.849879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.849889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.849938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.465 [2024-07-24 20:02:49.850346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.850357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.850674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.850685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.851141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.851152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 00:26:58.465 [2024-07-24 20:02:49.851533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.465 [2024-07-24 20:02:49.851543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.465 qpair failed and we were unable to recover it. 
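errno = 111 is ECONNREFUSED on Linux: the kernel reached 10.0.0.2 but got a TCP reset back because nothing was accepting on port 4420 (the IANA-assigned NVMe/TCP port) at that moment. The spdk_app_start NOTICE interleaved just above is the SPDK target application still coming up on this run, which is consistent with the initiator's connects being refused until the target binds the port. A minimal sketch of the failing call with plain POSIX sockets (illustrative only, not SPDK's posix.c):

/* Connect to an NVMe/TCP target address. If the host is reachable but
 * no listener is bound to the port, connect() fails with ECONNREFUSED,
 * which glibc numbers 111 -- the "connect() failed, errno = 111"
 * reported repeatedly in this log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}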
00:26:58.465 [2024-07-24 20:02:49.851928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.851939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.852319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.852330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.852706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.852716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.853040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.853055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.853444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.853455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.853832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.853843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.854177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.854188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.854595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.854606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.854992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.465 [2024-07-24 20:02:49.855003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.465 qpair failed and we were unable to recover it.
00:26:58.465 [2024-07-24 20:02:49.855326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.855338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.855690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.855702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.856094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.856106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.856499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.856511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.856672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.856683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.857025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.857037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.857371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.857382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.857778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.857790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.858413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.858426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.858811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.858823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.859228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.859241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.859614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.859630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.859954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.859965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.860346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.860357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.860694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.860704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.861116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.861127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.861448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.861459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.861859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.861869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.862246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.862256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.862585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.862595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.862916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.862926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.863255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.863266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.863644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.863654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.864048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.864058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.864390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.864400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.864791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.864802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.865145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.865156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.865488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.865499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.865676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.865686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.866013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.866023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.866357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.866367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.866820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.866830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.867145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.867157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.867531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.867542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.867949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.867959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.868286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.868296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.466 qpair failed and we were unable to recover it.
00:26:58.466 [2024-07-24 20:02:49.868624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.466 [2024-07-24 20:02:49.868634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.869012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.869022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.869478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.869490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.869939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.869950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.870289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.870300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.870638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.870648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.870980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.870990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.871389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.871400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.871727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.871737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.872058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.872069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.872459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.872469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.872804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.872814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.873199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.873210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.873527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.873537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.873918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.873929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.874252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.874265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.874685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.874696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.875025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.875035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.875569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.875580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.875952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.875963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.876356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.876366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.876751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.876761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.877195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.877205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.877365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.877375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.877844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.877854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.878031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.878041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.878385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.878874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.878884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.879195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.879205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.879515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.879525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.879972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.879983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.880366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.880378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.880701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.880712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.881280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.881292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.881682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.881692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.882077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.882088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.882464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.467 [2024-07-24 20:02:49.882475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.467 qpair failed and we were unable to recover it.
00:26:58.467 [2024-07-24 20:02:49.882746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.882756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.883100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.883111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.883496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.883506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.883895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.883906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.884284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.884295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.884436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.884446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.884893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.884904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.885349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.885359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.885805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.885816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.886204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.886216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.886545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.886556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.887209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.887221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.887550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.887563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.887873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.887889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.888233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.888249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.888642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.888657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.888995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.889008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.889400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.889413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.889829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.889846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.890192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.890204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.890440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.890451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.890838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.890849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.891261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.891272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.891479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.891489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.891687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.891698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.892060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.892072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.892400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.892411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.468 [2024-07-24 20:02:49.892720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.468 [2024-07-24 20:02:49.892730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.468 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.893130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.893142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.893480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.893491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.893891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.893903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.894309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.894321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.894716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.894727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.895047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.895058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.895381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.895392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.895622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.895633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.896032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.896051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.896441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.896453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.896857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.896868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.897204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.897215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.897618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.897628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.898069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.898080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.898482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.898493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.898832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.898842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.899308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.899318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.899786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.899796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.900265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.900275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.900594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.469 [2024-07-24 20:02:49.900604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.469 qpair failed and we were unable to recover it.
00:26:58.469 [2024-07-24 20:02:49.901051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.901062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.901501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.901512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.901793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.901803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.902193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.902203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.902595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.902605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.903005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.903016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.903351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.903361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.903742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.903752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.904151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.904162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.904500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.904510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 
00:26:58.469 [2024-07-24 20:02:49.904895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.904907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.905234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.905245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.905648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.905659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.905897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.905907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.906349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.906360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.906769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.469 [2024-07-24 20:02:49.906779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.469 qpair failed and we were unable to recover it. 00:26:58.469 [2024-07-24 20:02:49.907158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.907168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.907639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.907650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.907853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.907863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.908199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.908209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 
00:26:58.470 [2024-07-24 20:02:49.908650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.908661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.908993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.909004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.909471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.909482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.909829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.909839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.910247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.910257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.910620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.910629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.910964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.910974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.911299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.911309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.911722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.911732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.912132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.912143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 
00:26:58.470 [2024-07-24 20:02:49.912551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.912561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.912953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.912963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.913345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.913355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.913687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.913697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.914020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.914030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.914410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.914420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.914741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.914751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.915134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.915145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.915468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.915478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.915869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.915879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 
00:26:58.470 [2024-07-24 20:02:49.916305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.916317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.916635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.916646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.916989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.916999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.917466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.917476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.917810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.917820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.918262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.918273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.918622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.918632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.919071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.919082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.919505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.919515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.919896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.919906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 
00:26:58.470 [2024-07-24 20:02:49.920325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.920338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.920804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.470 [2024-07-24 20:02:49.920814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.470 qpair failed and we were unable to recover it. 00:26:58.470 [2024-07-24 20:02:49.921156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.921166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.921607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.921843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.921853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.922198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.922208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.922553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.922563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.922892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.922902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.923256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.923266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.923583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.923593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 
00:26:58.471 [2024-07-24 20:02:49.923919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.923929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.924316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.924326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.924705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.924715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.925057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.925068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.925526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.925538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.925979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.925989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.926397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.926408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.926863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.926873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.927213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.927224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 00:26:58.471 [2024-07-24 20:02:49.927637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.471 [2024-07-24 20:02:49.927648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.471 qpair failed and we were unable to recover it. 
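Errno 111 on Linux is ECONNREFUSED: connect() reached 10.0.0.2, but nothing was accepting TCP connections on port 4420 yet, apparently because the nvmf target had not finished starting (the tracepoint and reactor NOTICE lines below mark the point where it comes up). A minimal sketch of the same condition, assuming a shell with the common nc(1) utility; the probe is illustrative only and not part of the test run:

    # Probe the nvmf target port; with no listener bound to 10.0.0.2:4420,
    # the underlying connect() fails with ECONNREFUSED (errno 111)
    nc -z -w 1 10.0.0.2 4420 || echo "connect() refused: no listener on 10.0.0.2:4420"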
00:26:58.471 [... connect() failed (errno = 111) / qpair-failed retries continue from 20:02:49.928026 through 20:02:49.928944 ...]
00:26:58.471 [2024-07-24 20:02:49.929013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:58.471 [2024-07-24 20:02:49.929051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:58.471 [2024-07-24 20:02:49.929059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:58.471 [2024-07-24 20:02:49.929065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:58.471 [2024-07-24 20:02:49.929070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:58.471 [2024-07-24 20:02:49.929188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:58.471 [2024-07-24 20:02:49.929335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:58.471 [2024-07-24 20:02:49.929442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:58.471 [2024-07-24 20:02:49.929443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:58.471 [... the connect() failed (errno = 111) / qpair-failed sequence continues, interleaved with the notices above, from 20:02:49.929325 through 20:02:49.930542 ...]
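The NOTICE lines above are the target's own instructions for inspecting the tracepoints it enabled (group mask 0xFFFF). A minimal sketch of that workflow, assuming the spdk_trace binary from the SPDK build is on PATH; the command and the shared-memory path are quoted verbatim from the notices:

    # Capture a snapshot of events from the running nvmf application,
    # exactly as the NOTICE suggests
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/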
00:26:58.471 [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence keeps repeating for tqpair=0x7fb2e0000b90 at 10.0.0.2 port 4420, from 20:02:49.931029 through 20:02:49.968872 ...]
00:26:58.474 [2024-07-24 20:02:49.969200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.969213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.969686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.969697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.970093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.970104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.970571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.970582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.970975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.970985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.971464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.971475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.971861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.971873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.972316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.972327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.972767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.972778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.973118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.973128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 
00:26:58.474 [2024-07-24 20:02:49.973530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.973542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.973970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.973982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.974447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.974459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.974862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.974872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.975344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.975356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.975794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.975806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.976211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.976222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.976612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.976623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.977090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.474 [2024-07-24 20:02:49.977101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.474 qpair failed and we were unable to recover it. 00:26:58.474 [2024-07-24 20:02:49.977583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.977594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 
00:26:58.475 [2024-07-24 20:02:49.977973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.977983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.978372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.978384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.978888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.978899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.979340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.979351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.979729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.979740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.980120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.980130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.980336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.980346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.980821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.980832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.981275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.981286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.981667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.981678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 
00:26:58.475 [2024-07-24 20:02:49.982145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.982156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.982567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.982577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.982809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.982820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.983228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.983239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.983723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.983734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.984175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.984187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.984515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.984525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.984995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.985005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.985409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.985419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.985872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.985882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 
00:26:58.475 [2024-07-24 20:02:49.986223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.986236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.986707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.986717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.987090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.987100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.987598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.987608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.987985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.987995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.988447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.988457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.988899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.988909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.989301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.989311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.989751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.989761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.990140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.990150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 
00:26:58.475 [2024-07-24 20:02:49.990541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.990551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.990966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.990976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.991419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.991429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.991749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.991759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.992200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.992211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.475 qpair failed and we were unable to recover it. 00:26:58.475 [2024-07-24 20:02:49.992680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.475 [2024-07-24 20:02:49.992691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.993107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.993117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.993508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.993518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.993906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.993916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.994597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.994607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 
00:26:58.476 [2024-07-24 20:02:49.994992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.995002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.995443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.995454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.995846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.995856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.996258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.996268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.996734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.996744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.997147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.997158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.997559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.997569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.997966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.997976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.998368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.998378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.998851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.998861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 
00:26:58.476 [2024-07-24 20:02:49.999301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.999312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:49.999592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:49.999602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.000080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.000091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.000409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.000419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.000888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.000899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.001393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.001404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.001747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.001758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.002099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.002109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.002553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.002563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.002948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.002959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 
00:26:58.476 [2024-07-24 20:02:50.003375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.003387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.003890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.003900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.004319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.004330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.004736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.004752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.005169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.005187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.005729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.005749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.006354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.006373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.006853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.006870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.007383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.007403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.007769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.007795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 
00:26:58.476 [2024-07-24 20:02:50.008174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.476 [2024-07-24 20:02:50.008188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.476 qpair failed and we were unable to recover it. 00:26:58.476 [2024-07-24 20:02:50.008596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.008609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.008938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.008949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.009346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.009358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.009720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.009731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.010057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.010068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.010418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.010428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.010836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.010846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.011232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.011243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.011561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.011571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 
00:26:58.477 [2024-07-24 20:02:50.011897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.011908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.012381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.012392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.012787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.012797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.013200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.013211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.013421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.013431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.013759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.013770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.014162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.014173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.014500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.014510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.014968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.014978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.015373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.015384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 
00:26:58.477 [2024-07-24 20:02:50.015836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.015847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.016310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.016321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.016671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.016682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.017160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.017171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.017574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.017585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.018219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.018229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.018560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.018571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.018962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.018973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.019383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.019393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.019769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.019780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 
00:26:58.477 [2024-07-24 20:02:50.020171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.020185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.020570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.020580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.020971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.020981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.021524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.021534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.021864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.021874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.022261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.022272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.022675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.477 [2024-07-24 20:02:50.022685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.477 qpair failed and we were unable to recover it. 00:26:58.477 [2024-07-24 20:02:50.023075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.478 [2024-07-24 20:02:50.023086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.478 qpair failed and we were unable to recover it. 00:26:58.478 [2024-07-24 20:02:50.023530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.478 [2024-07-24 20:02:50.023540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.478 qpair failed and we were unable to recover it. 00:26:58.478 [2024-07-24 20:02:50.023862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.478 [2024-07-24 20:02:50.023872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.478 qpair failed and we were unable to recover it. 
00:26:58.478 [2024-07-24 20:02:50.024201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.478 [2024-07-24 20:02:50.024211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.478 qpair failed and we were unable to recover it.
[... identical failure triple repeated for every reconnect attempt between 20:02:50.024821 and 20:02:50.109766 (elapsed 00:26:58.478 through 00:26:58.750): posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:26:58.750 [2024-07-24 20:02:50.110238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.750 [2024-07-24 20:02:50.110248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.750 qpair failed and we were unable to recover it.
00:26:58.750 [2024-07-24 20:02:50.110697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.110707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.111093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.111103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.111545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.111555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.111890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.111900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.112306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.112316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.112703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.750 [2024-07-24 20:02:50.112713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.750 qpair failed and we were unable to recover it. 00:26:58.750 [2024-07-24 20:02:50.113117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.113127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.113526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.113536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.113921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.113931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.114331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.114342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.114731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.114741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.115119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.115129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.115510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.115520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.115854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.115864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.116259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.116269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.116657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.116667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.117073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.117084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.117482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.117492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.117878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.117888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.118281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.118291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.118681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.118691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.119068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.119079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.119541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.119552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.119994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.120004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.120313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.120323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.120701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.120710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.121100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.121110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.121555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.121565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.122006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.122016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.122456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.122467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.122845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.122855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.123297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.123307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.123749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.123760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.124029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.124039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.124382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.124393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.124714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.124724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.125189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.125199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.125596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.125606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.125917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.125927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.126380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.126390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.126831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.126842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.127285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.127295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.127760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.127770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.128151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.128161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.128538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.128548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.129018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.129028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.129498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.129508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.129976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.129988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.130426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.130437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.130830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.130840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.131282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.131293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.131692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.131702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.132103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.132113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.132575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.132586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.132975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.132984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.133335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.133346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.133743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.133753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.134193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.134203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.134642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.134652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.135051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.135061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 
00:26:58.751 [2024-07-24 20:02:50.135452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.135462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.135853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.135864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.136330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.136351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.136742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.136752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.137137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.137147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.137596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.137606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.751 [2024-07-24 20:02:50.137838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.751 [2024-07-24 20:02:50.137849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.751 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.138236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.138246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.138649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.138659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.138996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.139006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.139342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.139352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.139564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.139573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.140036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.140050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.140375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.140385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.140732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.140742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.141126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.141137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.141513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.141523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.141856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.141866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.142315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.142326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.142707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.142717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.143107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.143117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.143518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.143528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.143858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.143868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.144332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.144342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.144783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.144794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.145128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.145138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.145547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.145557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.145942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.145954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.146337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.146348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.146789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.146799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.147247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.147257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.147583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.147593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.147966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.147976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.148360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.148371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.148702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.148712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.149090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.149101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.149542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.149552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.149966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.149976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.150442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.150452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.150892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.150902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.151350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.151360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.151801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.151811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.152135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.152145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.152610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.152620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.153102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.153113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.153596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.153606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.153934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.153944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.154405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.154416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.154879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.154889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.155332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.155343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.155737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.155747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.156163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.156173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.156514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.156525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.156989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.156999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.157469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.157479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.157986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.157996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.158458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.158469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.158962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.158972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.159419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.159430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.159835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.159844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.160245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.160256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.160694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.160704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.161157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.161168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.161559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.161569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.162035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.162048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.162567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.162577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.162867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.162878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.163321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.163333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.163779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.163789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 00:26:58.752 [2024-07-24 20:02:50.164120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.752 [2024-07-24 20:02:50.164130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.752 qpair failed and we were unable to recover it. 
00:26:58.752 [2024-07-24 20:02:50.164552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.752 [2024-07-24 20:02:50.164562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.752 qpair failed and we were unable to recover it.
[... roughly 200 further identical connect() failures (errno = 111) against 10.0.0.2, port 4420 for tqpair=0x7fb2e0000b90 elided; the same three-line pattern repeats unchanged from 20:02:50.164903 through 20:02:50.256403 ...]
00:26:58.756 [2024-07-24 20:02:50.256876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.756 [2024-07-24 20:02:50.256886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.756 qpair failed and we were unable to recover it.
00:26:58.756 [2024-07-24 20:02:50.257291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.257301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.257688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.257698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.258037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.258053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.258493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.258503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.258899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.258909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.259382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.259393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.259844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.259854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.260242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.260252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.260718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.260728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.261194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.261205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 
00:26:58.756 [2024-07-24 20:02:50.261595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.261605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.261988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.261999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.262331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.262342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.262782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.262792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.263193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.263203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.263661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.263671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.264019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.264030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.264472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.264483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.264896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.264906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.265305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.265316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 
00:26:58.756 [2024-07-24 20:02:50.265705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.265715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.266185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.266196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.266597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.266607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.266998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.267008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.267458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.267469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.267932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.267942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.268337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.268347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.268829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.268841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.269257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.269267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.269691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.269701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 
00:26:58.756 [2024-07-24 20:02:50.270148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.270159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.270648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.270658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.271140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.271150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.271606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.271616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.272044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.272055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.272438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.272448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.272834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.272845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.273317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.273328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.273816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.273825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.274245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.274256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 
00:26:58.756 [2024-07-24 20:02:50.274702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.274713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.275114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.275124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.275520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.275530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.275971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.275981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.276436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.276446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.276913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.276923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.277361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.277370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.277832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.277842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.278234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.278245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.278707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.278717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 
00:26:58.756 [2024-07-24 20:02:50.279400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.279410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.279894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.279904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.280319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.280329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.280789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.280800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.281242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.281253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.281668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.281678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.282029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.282038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.282465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.282475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.756 [2024-07-24 20:02:50.282899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.756 [2024-07-24 20:02:50.282909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.756 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.283350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.283361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.283751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.283761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.284240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.284250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.284596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.284605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.285100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.285110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.285499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.285510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.286004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.286015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.286483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.286494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.286883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.286894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.287337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.287347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.287750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.287760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.288215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.288226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.288630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.288640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.289151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.289161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.289629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.289639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.290102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.290112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.290552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.290563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.291031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.291045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.291491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.291501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.291943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.291953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.292412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.292423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.292818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.292829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.293220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.293230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.293693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.293703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.294198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.294208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.294610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.294620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.294994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.295004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.295445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.295455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.295919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.295929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.296422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.296432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.296805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.296815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.297198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.297208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.297610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.297620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.298085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.298095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.298532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.298542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.298928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.298938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.299403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.299414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.299820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.299830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.300302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.300313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.300655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.300665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.301104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.301115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.301579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.301588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.302027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.302037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.302513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.302523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.303010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.303020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.303514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.303524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.303968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.303978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.304449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.304459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.304860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.304872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.305330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.305340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.305818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.305828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.306224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.306234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.306630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.306640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.307103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.307114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.307602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.307612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.308096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.308106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.308572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.308582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.309081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.309092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.309554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.309564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.310004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.310014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.310420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.310431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 
00:26:58.757 [2024-07-24 20:02:50.310885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.310895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.311363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.311373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.311764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.311774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.312212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.312222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.312683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.312693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.313086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.313096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.313488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.313498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.314006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.757 [2024-07-24 20:02:50.314016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.757 qpair failed and we were unable to recover it. 00:26:58.757 [2024-07-24 20:02:50.314487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.758 [2024-07-24 20:02:50.314497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.758 qpair failed and we were unable to recover it. 00:26:58.758 [2024-07-24 20:02:50.314961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.758 [2024-07-24 20:02:50.314972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420 00:26:58.758 qpair failed and we were unable to recover it. 
00:26:58.758 [2024-07-24 20:02:50.315463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.758 [2024-07-24 20:02:50.315473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e0000b90 with addr=10.0.0.2, port=4420
00:26:58.758 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure entries for tqpair=0x7fb2e0000b90 repeat, timestamps 2024-07-24 20:02:50.315925 through 20:02:50.335028 ...]
00:26:59.028 [2024-07-24 20:02:50.335526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.028 [2024-07-24 20:02:50.335569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.028 qpair failed and we were unable to recover it.
00:26:59.028 [2024-07-24 20:02:50.336020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.028 [2024-07-24 20:02:50.336066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.028 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure entries for tqpair=0x7fb2e8000b90 repeat, timestamps 2024-07-24 20:02:50.336542 through 20:02:50.413909 ...]
00:26:59.033 [2024-07-24 20:02:50.414303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.414317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.414741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.414755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.415201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.415215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.415620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.415633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.416134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.416148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.416668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.416683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.417132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.417147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.417548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.417561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.418041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.418058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.418507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.418521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 
00:26:59.033 [2024-07-24 20:02:50.418990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.419004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.419416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.419430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.419930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.419944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.420344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.420358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.420828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.420842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.421332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.421346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.421796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.421810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.422275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.422289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.422693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.422707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.423103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.423117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 
00:26:59.033 [2024-07-24 20:02:50.423514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.423528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.423996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.033 [2024-07-24 20:02:50.424010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.033 qpair failed and we were unable to recover it. 00:26:59.033 [2024-07-24 20:02:50.424422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.424437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.424907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.424920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.425378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.425393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.425813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.425827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.426285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.426299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.426803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.426817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.427254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.427268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.427744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.427759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 
00:26:59.034 [2024-07-24 20:02:50.428240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.428255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.428718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.428732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.429137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.429152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.429534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.429548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.430022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.430038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.430561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.430574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.430985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.430999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.431473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.431488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.431970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.431984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.432454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.432468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 
00:26:59.034 [2024-07-24 20:02:50.432959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.432973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.433471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.433487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.433963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.433977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.434416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.434430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.434880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.434894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.435296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.435310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.435775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.435790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.436274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.436288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.436705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.436719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.437112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.437126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 
00:26:59.034 [2024-07-24 20:02:50.437556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.437570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.438029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.438047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.438520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.438534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.438928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.438942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.439411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.439425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.439877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.439891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.440246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.440260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.440648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.440662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.034 [2024-07-24 20:02:50.441141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.034 [2024-07-24 20:02:50.441155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.034 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.441558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.441572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 
00:26:59.035 [2024-07-24 20:02:50.442021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.442035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.442441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.442455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.442842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.442856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.443254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.443268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.443725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.443739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.444214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.444229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.444623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.444637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.445073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.445087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.445482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.445496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.445798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.445811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 
00:26:59.035 [2024-07-24 20:02:50.446206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.446220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.446612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.446625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.447101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.447115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.447525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.447539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.447882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.447898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.448444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.448459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.448841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.448855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.449330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.449345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.449814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.449828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.450238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.450252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 
00:26:59.035 [2024-07-24 20:02:50.450660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.450674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.451148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.451162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.451552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.451566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.451963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.451976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.452461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.452475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.452888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.452902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.453304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.453319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.453720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.453734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.454197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.454212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.454685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.454698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 
00:26:59.035 [2024-07-24 20:02:50.455170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.455184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.455587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.455601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.035 [2024-07-24 20:02:50.456063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.035 [2024-07-24 20:02:50.456077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.035 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.456475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.456489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.456959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.456973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.457373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.457387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.457858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.457872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.458277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.458291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.458692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.458706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.459181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.459194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 
00:26:59.036 [2024-07-24 20:02:50.459668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.459682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.460138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.460153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.460548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.460562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.461034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.461052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.461444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.461459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.461929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.461942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.462390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.462405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.462854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.462868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.463361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.463375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.463770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.463784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 
00:26:59.036 [2024-07-24 20:02:50.464185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.464199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.464653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.464666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.465068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.465081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.465505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.465519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.465920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.465936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.466317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.466331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.466718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.466732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.467184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.467198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.467613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.467627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.468097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.468110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 
00:26:59.036 [2024-07-24 20:02:50.468580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.468594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.469000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.469014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.469486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.469500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.469899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.469913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.470311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.470325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.470720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.470735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.471204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.471218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.471668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.471682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.036 [2024-07-24 20:02:50.472140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.036 [2024-07-24 20:02:50.472155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.036 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.472638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.472652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 
00:26:59.037 [2024-07-24 20:02:50.473035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.473052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.473473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.473486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.473972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.473986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.474342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.474357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.474702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.474716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.475061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.475075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.475477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.475491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.475889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.475904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.476352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.476366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.476817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.476831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 
00:26:59.037 [2024-07-24 20:02:50.477219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.477233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.477734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.477768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.478161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.478178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.478340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.478355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.478779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.478794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.479197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.479213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.479608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.479621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.480022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.480035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.480459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.480474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 00:26:59.037 [2024-07-24 20:02:50.480868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.037 [2024-07-24 20:02:50.480882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420 00:26:59.037 qpair failed and we were unable to recover it. 
00:26:59.037 [2024-07-24 20:02:50.481271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.481285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.481738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.481752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.482245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.482259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.482600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.482613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.483069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.483084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.483526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.483540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.484011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.484024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.484442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.484457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.484857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.484872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.485283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.485297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.485698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.485712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.486159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.486174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.486578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.486592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.487008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.487021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.487435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.487450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.037 [2024-07-24 20:02:50.487923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.037 [2024-07-24 20:02:50.487936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.037 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.488328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.488342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.488739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.488753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.489142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.489159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.489578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.490039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.490058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.490527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.490541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.490880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.490894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.491293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.491307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.491752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.491766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.492159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.492173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.492561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.492575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.492994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.493008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.493339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.493353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.493829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.493842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.494320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.494334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.494727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.494740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.495131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.495145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.495550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.495564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.496011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.496025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.496444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.496458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.496932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.496946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.497278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.497292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.497764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.497777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.498126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.498141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.498547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.498561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.499033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.499055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.499488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.499502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.499950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.499964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.500311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.500325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.500769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.038 [2024-07-24 20:02:50.500786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.038 qpair failed and we were unable to recover it.
00:26:59.038 [2024-07-24 20:02:50.501189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.501203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.501540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.501554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.501968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.501982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.502443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.502457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.502837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.502850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.503250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.503264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.503737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.503751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.504198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.504212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.504627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.504642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.505050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.505064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.505539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.505552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.505709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.505722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.506169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.506183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.506613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.506627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.506855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.506869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.507338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.507352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.507706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.507720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.508211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.508225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.508714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.508727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.509069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.509083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.509480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.509494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.509940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.509954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.510402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.510417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.510867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.510882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.511351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.511366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.511760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.511773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.512241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.512258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.512718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.512733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.513182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.513196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.513552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.513566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.514012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.514027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.514503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.514518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.514910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.514924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.515375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.515390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.515733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.515747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.516147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.516161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.516560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.039 [2024-07-24 20:02:50.516574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.039 qpair failed and we were unable to recover it.
00:26:59.039 [2024-07-24 20:02:50.516962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.516977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.517365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.517380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.517776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.517789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.518241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.518256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.518734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.518749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.519200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.519214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.519689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.519704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.520176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.520190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.520704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.520718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.521181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.521195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.521597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.521612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.522006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.522020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.522371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.522385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.522808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.522822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.523220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.523235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.523637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.523651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.524001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.524015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.524355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.524370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.524771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.524786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.525261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.525275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.525749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.526161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.526176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.526579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.526594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.526927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.526941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.527276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.527290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.527715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.527729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.528220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.528234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.528633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.528646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.529282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.529296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.529587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.529603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.530002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.530017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.530445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.530459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.530843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.530858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.531276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.531291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.531766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.531781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.532133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.532149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.532557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.040 [2024-07-24 20:02:50.532571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.040 qpair failed and we were unable to recover it.
00:26:59.040 [2024-07-24 20:02:50.532975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.532989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.533390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.533405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.533793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.533807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.534154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.534169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.534556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.534571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.534822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.534836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.535239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.535254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.535672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.535686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.536033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.536051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.536380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.536394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.536726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.536740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.537110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.537125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.537508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.537522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.537937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.537951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.538296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.538310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.538706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.538720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.539112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.539126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.539525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.539539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.539953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.539968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.540376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.540392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.540719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.540737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.541266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.541282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.541683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.541697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.542094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.542109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.542582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.542596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.542996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.543013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.543416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.543430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.543715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.543729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.544077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.544091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.544477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.544491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.544880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.544894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.545380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.545394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.545795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.545810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.546139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.546154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.546489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.546503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.546886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.546901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.041 qpair failed and we were unable to recover it.
00:26:59.041 [2024-07-24 20:02:50.547137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.041 [2024-07-24 20:02:50.547151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.547551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.547565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.548016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.548030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54f30 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.548358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.548376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.548853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.548868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.549268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.549284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.549495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.549509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.549985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.549999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.550399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.550414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.550807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.550821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.551147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.551162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.551551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.551568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.552048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.552063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.552459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.552473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.552765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.552778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.553120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.553135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.553535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.553549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.553943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.553957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.554345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.554360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.554777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.554791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.555270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.555285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.555670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.555683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.556155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.556170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.556631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.556645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.556990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.557005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.557247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.557263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.557668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.558030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.558048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.558433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.558447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.558794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.558808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.559279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.559294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.559688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.559702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.560091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.560106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.560494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.560508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.560962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.560976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.561362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.561377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.042 qpair failed and we were unable to recover it.
00:26:59.042 [2024-07-24 20:02:50.561767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.042 [2024-07-24 20:02:50.561781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.562172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.562187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.562606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.562620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.563055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.563069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.563524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.563538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.563957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.563972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.564445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.564460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.564671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.564685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.565162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.565176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.565577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.565591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.566065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.566079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.566459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.566473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.566943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.566958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.567368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.567382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.567852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.567866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.568272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.568288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.568688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.568702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.569151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.569165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.569554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.569568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.569967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.569981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.570374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.570389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.570785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.570799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.571182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.571196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.571624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.571638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.572016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.572030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.572503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.572516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.572912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.572926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.573268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.573282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.573754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.573768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.574103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.574118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.574567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.574581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.043 qpair failed and we were unable to recover it.
00:26:59.043 [2024-07-24 20:02:50.575057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.043 [2024-07-24 20:02:50.575071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.575502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.575516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.575746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.575760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.576149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.576163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.576641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.576655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.577132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.577147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.577549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.577563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.577999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.578013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.578519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.578534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.578985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.578999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.579435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.579449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.579928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.579942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.580436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.580450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.580925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.580939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.581424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.581439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.581839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.581853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.582304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.582318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.582808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.582822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.583219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.583233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.583707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.583720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.584196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.584211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.584596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.584610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.585073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.585087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.585602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.585616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.586158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.586175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.586467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.586481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.586824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.586838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.587329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.587343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.587812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.587826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.588252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.588266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.588557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.588571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.588964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.588978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.589448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.589462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.589876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.589889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.044 qpair failed and we were unable to recover it.
00:26:59.044 [2024-07-24 20:02:50.590351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.044 [2024-07-24 20:02:50.590365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.590785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.590798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.591294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.591308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.591741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.591755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.592235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.592249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.592647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.592661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.593113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.593127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.593597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.593610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.594015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.594029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.594431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.594445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.594842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.594856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.595335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.595349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.595798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.595811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.596215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.596229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.596624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.596638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.596909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.596923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.597418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.597432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.597888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.597902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.598302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.598316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.598735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.598749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.599222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.599236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.599723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.599737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.600233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.600247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.600636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.600650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.601052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.601067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.601510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.601524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.601977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.601991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.602451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.602466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:59.045 [2024-07-24 20:02:50.602986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.603003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:26:59.045 [2024-07-24 20:02:50.603347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.603365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:59.045 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:59.045 [2024-07-24 20:02:50.603828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.603844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.045 [2024-07-24 20:02:50.604258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.604274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.604742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.604757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.605263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.605278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.045 [2024-07-24 20:02:50.605780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.045 [2024-07-24 20:02:50.605794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.045 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.606291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.606306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.606725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.606740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.607214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.607229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.607658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.607672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.608104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.608118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.608559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.608573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.608976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.608991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.609466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.609481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.609886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.609901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.610371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.610386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.610855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.610869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.611285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.611299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.611766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.611781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.612313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.612328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.612745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.612759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.613234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.613249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.613745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.613760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.046 [2024-07-24 20:02:50.614247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.046 [2024-07-24 20:02:50.614261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.046 qpair failed and we were unable to recover it.
00:26:59.310 [2024-07-24 20:02:50.614718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.310 [2024-07-24 20:02:50.614733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.310 qpair failed and we were unable to recover it.
00:26:59.310 [2024-07-24 20:02:50.615255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.310 [2024-07-24 20:02:50.615270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.310 qpair failed and we were unable to recover it.
00:26:59.310 [2024-07-24 20:02:50.615674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.310 [2024-07-24 20:02:50.615689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.310 qpair failed and we were unable to recover it.
00:26:59.310 [2024-07-24 20:02:50.616145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.310 [2024-07-24 20:02:50.616159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.310 qpair failed and we were unable to recover it.
00:26:59.310 [2024-07-24 20:02:50.616561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.310 [2024-07-24 20:02:50.616575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.617000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.617014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.617474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.617488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.618006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.618020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.618507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.618522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.619129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.619144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.619624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.619639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.620121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.620136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.620483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.620497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.621151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.621167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.621618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.621632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.622140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.622158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.622553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.622567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.623063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.623077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.623550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.623564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.624160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.624175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.624649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.624663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.625108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.625122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.625480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.625494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.625956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.625970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.626380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.626394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.626796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.626811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.627207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.311 [2024-07-24 20:02:50.627222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.311 qpair failed and we were unable to recover it.
00:26:59.311 [2024-07-24 20:02:50.627620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.627634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.627985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.627998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.628407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.628421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.628913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.628927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.629429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.629445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.629805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.629818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.630229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.630243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.630654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.630668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.631072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.631087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.631537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.631550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 
00:26:59.311 [2024-07-24 20:02:50.631959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.631974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.632419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.632433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.632836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.311 [2024-07-24 20:02:50.632850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.311 qpair failed and we were unable to recover it. 00:26:59.311 [2024-07-24 20:02:50.633209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.633224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.633773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.633787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.634203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.634218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.634616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.634630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.635034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.635063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.635506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.635520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 00:26:59.312 [2024-07-24 20:02:50.635872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.312 [2024-07-24 20:02:50.635887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420 00:26:59.312 qpair failed and we were unable to recover it. 
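The errno = 111 in the burst above is ECONNREFUSED on Linux: the initiator keeps retrying 10.0.0.2:4420 while nothing is listening there yet (the target's listener is only added further down in this log). A quick way to confirm the errno mapping on the build host (assuming python3 is installed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # -> ECONNREFUSED - Connection refused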
[... connect() retries against 10.0.0.2:4420 (errno = 111) continue, 20:02:50.636304 through 20:02:50.638090; identical triples condensed ...]
00:26:59.312 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:59.312 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:59.312 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.312 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() retries at 20:02:50.638556, 20:02:50.639104 and 20:02:50.639478 condensed ...]
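The rpc_cmd seen in the trace above is the SPDK test harness's wrapper around scripts/rpc.py, talking to the running app over its default RPC socket. A minimal standalone sketch of the same step, assuming an SPDK checkout and a running target:

  # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0;
  # the RPC prints the new bdev's name (the lone 'Malloc0' output further down)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0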
[... the connect() failed (errno = 111) / sock connection error (tqpair=0x7fb2e8000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." triple keeps repeating, 20:02:50.639926 through 20:02:50.657844; identical retries condensed ...]
[... connect() retry against 10.0.0.2:4420 (errno = 111) at 20:02:50.658312 ...]
00:26:59.313 Malloc0
[... another connect() retry at 20:02:50.658724 ...]
00:26:59.313 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.313 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:59.313 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.313 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retries continue, 20:02:50.659215 through 20:02:50.661577; identical triples condensed ...]
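The transport step from the trace, sketched standalone. The trailing -o is taken verbatim from the option string the harness passes through; its exact meaning depends on the rpc.py revision in this checkout, so treat it as an assumption rather than a required flag:

  # Register the TCP transport with the NVMe-oF target layer; the
  # '*** TCP Transport Init ***' notice below is the target acting on this RPC
  ./scripts/rpc.py nvmf_create_transport -t tcp -o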
[... one more connect() retry (errno = 111) at 20:02:50.662028 ...]
00:26:59.313 [2024-07-24 20:02:50.662186] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() retries against 10.0.0.2:4420 (errno = 111) continue, 20:02:50.662521 through 20:02:50.669963; identical triples condensed ...]
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() retries against 10.0.0.2:4420 (errno = 111) continue, 20:02:50.670427 through 20:02:50.677945; identical triples condensed ...]
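The subsystem-creation step, sketched standalone with the flags exactly as they appear in the trace:

  # Create NVMe-oF subsystem cnode1; -a allows any host NQN to connect,
  # -s sets the subsystem serial number reported to hosts
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001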
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.314 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() retries against 10.0.0.2:4420 (errno = 111) continue, 20:02:50.678437 through 20:02:50.686349; identical triples condensed ...]
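The namespace step, sketched standalone:

  # Expose the Malloc0 bdev as a namespace of cnode1 (the NSID is auto-assigned
  # when none is given on the command line)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0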
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retries against 10.0.0.2:4420 (errno = 111) continue, 20:02:50.686761 through 20:02:50.689912; identical triples condensed ...]
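This is the step that finally opens 10.0.0.2:4420; every connect() attempt before it was refused with errno 111 precisely because no listener existed yet. Standalone sketch:

  # Start accepting NVMe/TCP hosts on 10.0.0.2:4420 for cnode1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420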
00:26:59.315 [2024-07-24 20:02:50.690397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:59.315 [2024-07-24 20:02:50.690426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.315 [2024-07-24 20:02:50.690440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb2e8000b90 with addr=10.0.0.2, port=4420
00:26:59.315 qpair failed and we were unable to recover it.
00:26:59.315 [2024-07-24 20:02:50.692853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.315 [2024-07-24 20:02:50.693056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.315 [2024-07-24 20:02:50.693084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.315 [2024-07-24 20:02:50.693096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.315 [2024-07-24 20:02:50.693104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.315 [2024-07-24 20:02:50.693134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.315 qpair failed and we were unable to recover it.
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:59.315 20:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2206311
00:26:59.316 [2024-07-24 20:02:50.702806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[... 20:02:50.702957 through 20:02:50.703010: the same Connect command failed / sct 1, sc 130 / CQ transport error -6 sequence as above; qpair failed and we were unable to recover it. ...]
[... the same failure block, consisting of ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1, the nvme_fabric.c Connect command failed (rc -5) and "completed with error: sct 1, sc 130" pair, the nvme_tcp.c "Failed to poll NVMe-oF Fabric CONNECT command" / "Failed to connect tqpair=0x7fb2e8000b90" pair, nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 1", and "qpair failed and we were unable to recover it.", repeats roughly every 10 ms, 20:02:50.712824 through 20:02:50.793172; identical repetitions condensed ...]
00:26:59.316 [2024-07-24 20:02:50.802980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.316 [2024-07-24 20:02:50.803120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.316 [2024-07-24 20:02:50.803136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.316 [2024-07-24 20:02:50.803143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.316 [2024-07-24 20:02:50.803149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.316 [2024-07-24 20:02:50.803166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.316 qpair failed and we were unable to recover it. 00:26:59.316 [2024-07-24 20:02:50.813016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.316 [2024-07-24 20:02:50.813168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.316 [2024-07-24 20:02:50.813184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.316 [2024-07-24 20:02:50.813195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.316 [2024-07-24 20:02:50.813201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.316 [2024-07-24 20:02:50.813217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.316 qpair failed and we were unable to recover it. 00:26:59.316 [2024-07-24 20:02:50.823051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.316 [2024-07-24 20:02:50.823198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.316 [2024-07-24 20:02:50.823215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.823221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.823227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.823244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 
00:26:59.317 [2024-07-24 20:02:50.833120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.833287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.833304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.833310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.833316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.833333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 00:26:59.317 [2024-07-24 20:02:50.843103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.843237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.843253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.843260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.843266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.843283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 00:26:59.317 [2024-07-24 20:02:50.853128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.853267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.853284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.853291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.853297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.853314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 
00:26:59.317 [2024-07-24 20:02:50.863151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.863286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.863303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.863310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.863315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.863332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 00:26:59.317 [2024-07-24 20:02:50.873179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.873314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.873330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.873337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.873343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.873359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 00:26:59.317 [2024-07-24 20:02:50.883212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.883348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.883364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.883371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.883376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.883392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 
00:26:59.317 [2024-07-24 20:02:50.893250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.317 [2024-07-24 20:02:50.893391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.317 [2024-07-24 20:02:50.893408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.317 [2024-07-24 20:02:50.893414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.317 [2024-07-24 20:02:50.893420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.317 [2024-07-24 20:02:50.893437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.317 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.903255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.903438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.903456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.903466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.903472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.903489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.913239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.913378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.913395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.913402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.913407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.913424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 
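Two errno values recur in every block: the initial socket failure (errno = 111) and the completion-path "CQ transport error -6 (No such device or address)". On Linux those are ECONNREFUSED and -ENXIO respectively; a quick check of the mapping, plain POSIX and nothing SPDK-specific:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* "connect() failed, errno = 111" in the log */
        printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
        /* "CQ transport error -6" is -ENXIO */
        printf("ENXIO        = %d (%s)\n", ENXIO, strerror(ENXIO));
        return 0;
    }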
00:26:59.579 [2024-07-24 20:02:50.923321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.923460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.923477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.923484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.923490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.923507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.933365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.933502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.933520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.933526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.933532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.933549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.943558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.943730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.943747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.943754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.943759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.943777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-24 20:02:50.953481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.953620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.953638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.953645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.953651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.953668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.963431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.963754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.963771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.963778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.963784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.963801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.973511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.973659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.973675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.973682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.973688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.973705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 
00:26:59.579 [2024-07-24 20:02:50.983530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.983662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.983679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.983686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.983692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.983709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:50.993534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:50.993708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.579 [2024-07-24 20:02:50.993728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.579 [2024-07-24 20:02:50.993735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.579 [2024-07-24 20:02:50.993741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.579 [2024-07-24 20:02:50.993758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.579 qpair failed and we were unable to recover it. 00:26:59.579 [2024-07-24 20:02:51.003492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.579 [2024-07-24 20:02:51.003629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.003646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.003653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.003658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.003675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-24 20:02:51.013586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.013721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.013737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.013744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.013750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.013766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.023538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.023675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.023691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.023698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.023704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.023721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.033581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.033719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.033735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.033742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.033748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.033768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-24 20:02:51.043605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.043740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.043758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.043765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.043770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.043787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.053657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.053794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.053810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.053817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.053823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.053839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.063760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.063902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.063919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.063925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.063931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.063948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-24 20:02:51.073732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.073864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.073881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.073888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.073893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.073910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.083805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.083941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.083963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.083970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.083977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.083994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.093732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.093872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.093889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.093895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.093902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.093919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.580 [2024-07-24 20:02:51.103887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.104071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.104088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.104095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.104101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.104118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.113846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.113980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.113996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.114003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.114009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.114025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 00:26:59.580 [2024-07-24 20:02:51.123891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.580 [2024-07-24 20:02:51.124025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.580 [2024-07-24 20:02:51.124047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.580 [2024-07-24 20:02:51.124055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.580 [2024-07-24 20:02:51.124064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.580 [2024-07-24 20:02:51.124081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.580 qpair failed and we were unable to recover it. 
00:26:59.581 [2024-07-24 20:02:51.133859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.581 [2024-07-24 20:02:51.134000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.581 [2024-07-24 20:02:51.134017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.581 [2024-07-24 20:02:51.134024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.581 [2024-07-24 20:02:51.134030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.581 [2024-07-24 20:02:51.134052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.581 qpair failed and we were unable to recover it. 00:26:59.581 [2024-07-24 20:02:51.143868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.581 [2024-07-24 20:02:51.144004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.581 [2024-07-24 20:02:51.144020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.581 [2024-07-24 20:02:51.144027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.581 [2024-07-24 20:02:51.144033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.581 [2024-07-24 20:02:51.144056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.581 qpair failed and we were unable to recover it. 00:26:59.581 [2024-07-24 20:02:51.153970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.581 [2024-07-24 20:02:51.154111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.581 [2024-07-24 20:02:51.154128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.581 [2024-07-24 20:02:51.154135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.581 [2024-07-24 20:02:51.154141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.581 [2024-07-24 20:02:51.154159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.581 qpair failed and we were unable to recover it. 
00:26:59.581 [2024-07-24 20:02:51.164013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.581 [2024-07-24 20:02:51.164155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.581 [2024-07-24 20:02:51.164172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.581 [2024-07-24 20:02:51.164178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.581 [2024-07-24 20:02:51.164184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.581 [2024-07-24 20:02:51.164201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.581 qpair failed and we were unable to recover it. 00:26:59.581 [2024-07-24 20:02:51.174062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.841 [2024-07-24 20:02:51.174203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.841 [2024-07-24 20:02:51.174220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.841 [2024-07-24 20:02:51.174227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.841 [2024-07-24 20:02:51.174232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.841 [2024-07-24 20:02:51.174249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.841 qpair failed and we were unable to recover it. 00:26:59.841 [2024-07-24 20:02:51.184081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.841 [2024-07-24 20:02:51.184399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.841 [2024-07-24 20:02:51.184417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.841 [2024-07-24 20:02:51.184424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.841 [2024-07-24 20:02:51.184430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.841 [2024-07-24 20:02:51.184446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.841 qpair failed and we were unable to recover it. 
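Reading only the bracketed timestamps, the host re-attempts the I/O qpair roughly every 10 ms (…:50.702, …:50.712, …:50.722, and so on), and each attempt dies the same way before the harness prints "qpair failed and we were unable to recover it." A sketch of that cadence, purely to show the loop shape the timestamps imply (this is not SPDK's reconnect code, and try_connect_io_qpair is a made-up stand-in):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for one CONNECT attempt; in this capture every attempt
     * is rejected with sct 1 / sc 130. */
    static bool try_connect_io_qpair(void)
    {
        return false;
    }

    int main(void)
    {
        for (int attempt = 0; attempt < 100; attempt++) {
            if (try_connect_io_qpair()) {
                return 0;
            }
            usleep(10 * 1000); /* ~10 ms between attempts, matching the log */
        }
        fprintf(stderr, "qpair failed and we were unable to recover it.\n");
        return 1;
    }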
00:26:59.841 [2024-07-24 20:02:51.194127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.841 [2024-07-24 20:02:51.194293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.841 [2024-07-24 20:02:51.194311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.841 [2024-07-24 20:02:51.194318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.841 [2024-07-24 20:02:51.194324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.841 [2024-07-24 20:02:51.194342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.204059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.204203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.204220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.204227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.204233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.204250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.214139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.214281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.214297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.214308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.214313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.214330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-24 20:02:51.224227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.224364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.224381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.224388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.224393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.224410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.234174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.234308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.234324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.234331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.234337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.234354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.244283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.244426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.244443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.244450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.244456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.244472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-24 20:02:51.254179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.254313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.254330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.254337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.254343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.254359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.264258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.264390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.264407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.264415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.264421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.264438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.274355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.274490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.274507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.274514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.274519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.274537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-24 20:02:51.284339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.284479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.284496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.284503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.284509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.284526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.294371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.294512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.294529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.294536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.294541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.294558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 00:26:59.842 [2024-07-24 20:02:51.304448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.842 [2024-07-24 20:02:51.304615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.842 [2024-07-24 20:02:51.304632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.842 [2024-07-24 20:02:51.304643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.842 [2024-07-24 20:02:51.304649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:26:59.842 [2024-07-24 20:02:51.304666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.842 qpair failed and we were unable to recover it. 
00:26:59.842 [2024-07-24 20:02:51.314346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.842 [2024-07-24 20:02:51.314481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.842 [2024-07-24 20:02:51.314498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.842 [2024-07-24 20:02:51.314505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.842 [2024-07-24 20:02:51.314511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.842 [2024-07-24 20:02:51.314528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.842 qpair failed and we were unable to recover it.
00:26:59.842 [2024-07-24 20:02:51.324462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.842 [2024-07-24 20:02:51.324598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.842 [2024-07-24 20:02:51.324614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.842 [2024-07-24 20:02:51.324621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.842 [2024-07-24 20:02:51.324627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.324644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.334462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.334605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.334622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.334628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.334634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.334651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.344444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.344583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.344599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.344607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.344612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.344629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.354546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.354680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.354697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.354704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.354709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.354726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.364598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.364731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.364748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.364755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.364761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.364777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.374611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.374751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.374768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.374775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.374781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.374798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.384738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.385051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.385068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.385075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.385081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.385097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.394742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.394895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.394915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.394922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.394927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.394944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.404717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.404856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.404873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.404879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.404885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.404902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.414649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.414784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.414801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.414807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.414813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.414830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.424754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.424887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.424904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.424911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.424916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.424933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:26:59.843 [2024-07-24 20:02:51.434826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.843 [2024-07-24 20:02:51.434955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.843 [2024-07-24 20:02:51.434971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.843 [2024-07-24 20:02:51.434978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.843 [2024-07-24 20:02:51.434984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:26:59.843 [2024-07-24 20:02:51.435004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:59.843 qpair failed and we were unable to recover it.
00:27:00.103 [2024-07-24 20:02:51.444812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.103 [2024-07-24 20:02:51.444947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.103 [2024-07-24 20:02:51.444964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.103 [2024-07-24 20:02:51.444971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.103 [2024-07-24 20:02:51.444977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.103 [2024-07-24 20:02:51.444993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.103 qpair failed and we were unable to recover it.
00:27:00.103 [2024-07-24 20:02:51.454828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.103 [2024-07-24 20:02:51.454966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.103 [2024-07-24 20:02:51.454983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.103 [2024-07-24 20:02:51.454990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.454995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.455012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.464856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.464988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.465005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.465012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.465017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.465034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.474895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.475033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.475055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.475062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.475068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.475085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.484860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.484992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.485012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.485019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.485024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.485041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.494947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.495087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.495105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.495111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.495117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.495134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.504956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.505105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.505122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.505129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.505134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.505151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.514938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.515076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.515093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.515100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.515106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.515122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.525048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.525182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.525199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.525206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.525215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.525231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.535076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.535212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.535229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.535236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.535241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.535258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.545046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.545177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.545193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.545200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.545206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.545223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.555041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.555220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.555236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.555243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.555249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.555266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.565098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.565231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.565247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.565254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.565260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.565277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.575177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.104 [2024-07-24 20:02:51.575314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.104 [2024-07-24 20:02:51.575331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.104 [2024-07-24 20:02:51.575338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.104 [2024-07-24 20:02:51.575344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.104 [2024-07-24 20:02:51.575360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.104 qpair failed and we were unable to recover it.
00:27:00.104 [2024-07-24 20:02:51.585164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.585307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.585323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.585330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.585336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.585352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.595155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.595287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.595304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.595311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.595317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.595333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.605295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.605460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.605477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.605484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.605489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.605506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.615305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.615435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.615452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.615459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.615468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.615484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.625256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.625386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.625403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.625410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.625415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.625432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.635363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.635496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.635512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.635519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.635525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.635542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.645396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.645529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.645546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.645553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.645558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.645575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.655429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.655561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.655577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.655584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.655589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.655606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.665453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.665591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.665608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.665614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.665620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.665636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.675463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.675595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.675611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.675618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.675624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.675640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.685508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.685642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.685659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.685665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.685671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.685688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.105 [2024-07-24 20:02:51.695513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.105 [2024-07-24 20:02:51.695650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.105 [2024-07-24 20:02:51.695667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.105 [2024-07-24 20:02:51.695674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.105 [2024-07-24 20:02:51.695679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.105 [2024-07-24 20:02:51.695696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.105 qpair failed and we were unable to recover it.
00:27:00.366 [2024-07-24 20:02:51.705571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.366 [2024-07-24 20:02:51.705706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.366 [2024-07-24 20:02:51.705723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.366 [2024-07-24 20:02:51.705733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.366 [2024-07-24 20:02:51.705739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.366 [2024-07-24 20:02:51.705756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.366 qpair failed and we were unable to recover it.
00:27:00.366 [2024-07-24 20:02:51.715590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.366 [2024-07-24 20:02:51.715729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.366 [2024-07-24 20:02:51.715745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.366 [2024-07-24 20:02:51.715752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.366 [2024-07-24 20:02:51.715758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.366 [2024-07-24 20:02:51.715775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.366 qpair failed and we were unable to recover it.
00:27:00.366 [2024-07-24 20:02:51.725660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.366 [2024-07-24 20:02:51.725820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.366 [2024-07-24 20:02:51.725837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.366 [2024-07-24 20:02:51.725844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.366 [2024-07-24 20:02:51.725849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.366 [2024-07-24 20:02:51.725866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.366 qpair failed and we were unable to recover it.
00:27:00.366 [2024-07-24 20:02:51.735651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.366 [2024-07-24 20:02:51.735787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.366 [2024-07-24 20:02:51.735803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.366 [2024-07-24 20:02:51.735810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.366 [2024-07-24 20:02:51.735816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.366 [2024-07-24 20:02:51.735834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.366 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.745673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.745812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.745830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.745837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.745842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.745860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.755743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.755880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.755897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.755904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.755910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.755927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.765672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.765803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.765820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.765827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.765832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.765849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.775763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.775904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.775921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.775928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.775933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.775951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.785798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.785933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.785949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.785956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.785962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.785979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.795826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.795962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.795982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.795989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.795995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.796011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.805859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.805994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.806010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.806018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.806023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.806040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.815899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.816038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.816059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.816066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.816071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.816088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.825921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.826058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.826075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.826082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.826088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.826104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.835942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.836099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.836115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.836122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.836128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.836148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.845963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.846108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.846126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.846132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.846138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.846155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.855970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.856114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.856131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.856138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.856144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.856160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.367 [2024-07-24 20:02:51.866027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.367 [2024-07-24 20:02:51.866167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.367 [2024-07-24 20:02:51.866184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.367 [2024-07-24 20:02:51.866191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.367 [2024-07-24 20:02:51.866197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.367 [2024-07-24 20:02:51.866213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.367 qpair failed and we were unable to recover it.
00:27:00.368 [2024-07-24 20:02:51.876056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.368 [2024-07-24 20:02:51.876191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.368 [2024-07-24 20:02:51.876207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.368 [2024-07-24 20:02:51.876214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.368 [2024-07-24 20:02:51.876220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:00.368 [2024-07-24 20:02:51.876237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:00.368 qpair failed and we were unable to recover it.
00:27:00.368 [2024-07-24 20:02:51.886092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.886232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.886251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.886259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.886264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.886281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-24 20:02:51.896100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.896236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.896252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.896259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.896265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.896281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-24 20:02:51.906152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.906289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.906305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.906312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.906317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.906335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-24 20:02:51.916162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.916302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.916319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.916326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.916332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.916348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-24 20:02:51.926206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.926342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.926358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.926365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.926373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.926390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-24 20:02:51.936236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.936371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.936387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.936394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.936400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.936416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 
00:27:00.368 [2024-07-24 20:02:51.946255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.946387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.946403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.946410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.946416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.946433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.368 [2024-07-24 20:02:51.956282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.368 [2024-07-24 20:02:51.956439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.368 [2024-07-24 20:02:51.956455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.368 [2024-07-24 20:02:51.956462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.368 [2024-07-24 20:02:51.956468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.368 [2024-07-24 20:02:51.956485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.368 qpair failed and we were unable to recover it. 00:27:00.629 [2024-07-24 20:02:51.966343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.629 [2024-07-24 20:02:51.966483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.629 [2024-07-24 20:02:51.966501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.629 [2024-07-24 20:02:51.966508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.629 [2024-07-24 20:02:51.966514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.629 [2024-07-24 20:02:51.966530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.629 qpair failed and we were unable to recover it. 
00:27:00.629 [2024-07-24 20:02:51.976350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:51.976492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:51.976508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:51.976515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:51.976521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:51.976538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:51.986376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:51.986514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:51.986531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:51.986538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:51.986544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:51.986561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:51.996390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:51.996528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:51.996544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:51.996551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:51.996557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:51.996574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-24 20:02:52.006449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.006596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.006613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.006620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.006625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.006642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.016462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.016598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.016615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.016622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.016635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.016651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.026485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.026614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.026631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.026638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.026645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.026661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-24 20:02:52.036509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.036645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.036661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.036668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.036674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.036691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.046545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.046680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.046697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.046704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.046710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.046727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.056572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.056713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.056730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.056736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.056742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.056759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-24 20:02:52.066645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.066784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.066801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.066808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.066813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.066830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.076538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.076673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.076689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.076696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.076702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.076719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 00:27:00.630 [2024-07-24 20:02:52.086678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.086841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.086858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.630 [2024-07-24 20:02:52.086864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.630 [2024-07-24 20:02:52.086870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.630 [2024-07-24 20:02:52.086886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.630 qpair failed and we were unable to recover it. 
00:27:00.630 [2024-07-24 20:02:52.096724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.630 [2024-07-24 20:02:52.096884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.630 [2024-07-24 20:02:52.096901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.096907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.096913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.096930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.106708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.106844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.106860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.106871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.106876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.106893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.116653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.116789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.116806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.116813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.116819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.116835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-24 20:02:52.126771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.126901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.126917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.126924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.126930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.126946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.136764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.136907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.136924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.136931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.136937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.136953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.146822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.146950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.146967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.146974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.146980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.146997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-24 20:02:52.156853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.156987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.157004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.157011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.157017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.157033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.166890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.167025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.167046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.167054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.167060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.167076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.176914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.177054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.177071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.177078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.177084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.177101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-24 20:02:52.186938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.187084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.187101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.187108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.187113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.187130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.196960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.197098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.197118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.197125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.197131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.197147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.631 [2024-07-24 20:02:52.207009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.207147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.207163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.207170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.207175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.207192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 
00:27:00.631 [2024-07-24 20:02:52.217036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.631 [2024-07-24 20:02:52.217179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.631 [2024-07-24 20:02:52.217196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.631 [2024-07-24 20:02:52.217203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.631 [2024-07-24 20:02:52.217208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.631 [2024-07-24 20:02:52.217225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.631 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.226989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.227129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.227146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.227153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.227159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.227176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.237087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.237225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.237242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.237249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.237255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.237275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 
00:27:00.893 [2024-07-24 20:02:52.247113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.247248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.247264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.247271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.247276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.247293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.257131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.257267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.257283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.257290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.257296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.257313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.267177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.267312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.267328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.267335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.267341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.267358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 
00:27:00.893 [2024-07-24 20:02:52.277170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.277306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.277323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.277330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.277336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.277352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.287249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.287385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.287405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.287412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.893 [2024-07-24 20:02:52.287418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.893 [2024-07-24 20:02:52.287435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.893 qpair failed and we were unable to recover it. 00:27:00.893 [2024-07-24 20:02:52.297275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.893 [2024-07-24 20:02:52.297414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.893 [2024-07-24 20:02:52.297431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.893 [2024-07-24 20:02:52.297438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.297443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.297460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 
00:27:00.894 [2024-07-24 20:02:52.307417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.307549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.307565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.307572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.307577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.307594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.317319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.317458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.317475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.317482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.317488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.317504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.327335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.327468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.327485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.327492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.327497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.327517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 
00:27:00.894 [2024-07-24 20:02:52.337349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.337488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.337505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.337512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.337518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.337536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.347325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.347456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.347473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.347480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.347486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.347504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.357433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.357565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.357582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.357589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.357594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.357611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 
00:27:00.894 [2024-07-24 20:02:52.367475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.367614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.367631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.367640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.367647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.367664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.377497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.377634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.377651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.377658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.377664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.377681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.387518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.387652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.387669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.387677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.387682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.387699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 
00:27:00.894 [2024-07-24 20:02:52.397482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.397621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.397637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.397644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.397650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.397667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.407519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.407653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.407669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.407676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.407682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.407699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 00:27:00.894 [2024-07-24 20:02:52.417610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.417746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.894 [2024-07-24 20:02:52.417764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.894 [2024-07-24 20:02:52.417771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.894 [2024-07-24 20:02:52.417781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.894 [2024-07-24 20:02:52.417798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.894 qpair failed and we were unable to recover it. 
00:27:00.894 [2024-07-24 20:02:52.427645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.894 [2024-07-24 20:02:52.427788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.427805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.427812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.427817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.427834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 00:27:00.895 [2024-07-24 20:02:52.437590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.437727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.437743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.437751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.437756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.437773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 00:27:00.895 [2024-07-24 20:02:52.447706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.447845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.447862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.447869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.447875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.447892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 
00:27:00.895 [2024-07-24 20:02:52.457760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.457896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.457912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.457919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.457924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.457941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 00:27:00.895 [2024-07-24 20:02:52.467827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.467980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.467997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.468003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.468009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.468025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 00:27:00.895 [2024-07-24 20:02:52.477750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.477895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.477912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.477918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.477924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.477941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 
00:27:00.895 [2024-07-24 20:02:52.487769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.895 [2024-07-24 20:02:52.487903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.895 [2024-07-24 20:02:52.487920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.895 [2024-07-24 20:02:52.487928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.895 [2024-07-24 20:02:52.487934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:00.895 [2024-07-24 20:02:52.487951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:00.895 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.497837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.497971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.497989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.497996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.498002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.498020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.507927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.508066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.508083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.508093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.508099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.508116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 
00:27:01.156 [2024-07-24 20:02:52.517910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.518101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.518117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.518124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.518129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.518147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.527949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.528093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.528110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.528117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.528123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.528146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.537968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.538113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.538130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.538137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.538142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.538159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 
00:27:01.156 [2024-07-24 20:02:52.547983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.548118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.548135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.548143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.548148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.548166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.558023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.558163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.558179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.558185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.558191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.558208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 00:27:01.156 [2024-07-24 20:02:52.568062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.156 [2024-07-24 20:02:52.568195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.156 [2024-07-24 20:02:52.568212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.156 [2024-07-24 20:02:52.568219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.156 [2024-07-24 20:02:52.568225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.156 [2024-07-24 20:02:52.568242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.156 qpair failed and we were unable to recover it. 
00:27:01.156 [2024-07-24 20:02:52.578092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.578232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.578249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.578256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.578262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.578279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.588312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.588453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.588470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.588477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.588482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.588499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.598074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.598217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.598233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.598244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.598250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.598266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 
00:27:01.157 [2024-07-24 20:02:52.608118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.608254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.608270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.608277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.608282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.608299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.618162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.618295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.618311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.618318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.618323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.618340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.628186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.628323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.628340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.628346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.628352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.628369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 
00:27:01.157 [2024-07-24 20:02:52.638225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.638405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.638422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.638429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.638435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.638451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.648265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.648406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.648422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.648429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.648434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.648451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.658244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.658381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.658397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.658404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.658410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.658427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 
00:27:01.157 [2024-07-24 20:02:52.668365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.668501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.668519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.668526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.668532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.668549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.678309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.678446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.678463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.678470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.678476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.678493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 00:27:01.157 [2024-07-24 20:02:52.688392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.688526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.688547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.688554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.688560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.157 [2024-07-24 20:02:52.688576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.157 qpair failed and we were unable to recover it. 
00:27:01.157 [2024-07-24 20:02:52.698394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.157 [2024-07-24 20:02:52.698530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.157 [2024-07-24 20:02:52.698547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.157 [2024-07-24 20:02:52.698554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.157 [2024-07-24 20:02:52.698560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.698577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 00:27:01.158 [2024-07-24 20:02:52.708382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.158 [2024-07-24 20:02:52.708513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.158 [2024-07-24 20:02:52.708530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.158 [2024-07-24 20:02:52.708537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.158 [2024-07-24 20:02:52.708543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.708560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 00:27:01.158 [2024-07-24 20:02:52.718411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.158 [2024-07-24 20:02:52.718546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.158 [2024-07-24 20:02:52.718563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.158 [2024-07-24 20:02:52.718570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.158 [2024-07-24 20:02:52.718575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.718592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 
00:27:01.158 [2024-07-24 20:02:52.728519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.158 [2024-07-24 20:02:52.728655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.158 [2024-07-24 20:02:52.728673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.158 [2024-07-24 20:02:52.728680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.158 [2024-07-24 20:02:52.728685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.728705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 00:27:01.158 [2024-07-24 20:02:52.738566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.158 [2024-07-24 20:02:52.738700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.158 [2024-07-24 20:02:52.738717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.158 [2024-07-24 20:02:52.738724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.158 [2024-07-24 20:02:52.738730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.738747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 00:27:01.158 [2024-07-24 20:02:52.748544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.158 [2024-07-24 20:02:52.748678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.158 [2024-07-24 20:02:52.748697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.158 [2024-07-24 20:02:52.748704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.158 [2024-07-24 20:02:52.748711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.158 [2024-07-24 20:02:52.748728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.158 qpair failed and we were unable to recover it. 
00:27:01.419 [2024-07-24 20:02:52.758613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.419 [2024-07-24 20:02:52.758752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.419 [2024-07-24 20:02:52.758770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.419 [2024-07-24 20:02:52.758777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.419 [2024-07-24 20:02:52.758783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.419 [2024-07-24 20:02:52.758800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.419 qpair failed and we were unable to recover it. 00:27:01.419 [2024-07-24 20:02:52.768609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.419 [2024-07-24 20:02:52.768745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.419 [2024-07-24 20:02:52.768761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.419 [2024-07-24 20:02:52.768769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.419 [2024-07-24 20:02:52.768774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.419 [2024-07-24 20:02:52.768792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.419 qpair failed and we were unable to recover it. 00:27:01.419 [2024-07-24 20:02:52.778662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.419 [2024-07-24 20:02:52.778801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.419 [2024-07-24 20:02:52.778825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.419 [2024-07-24 20:02:52.778832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.419 [2024-07-24 20:02:52.778838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.419 [2024-07-24 20:02:52.778854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.419 qpair failed and we were unable to recover it. 
00:27:01.419 [2024-07-24 20:02:52.788694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.419 [2024-07-24 20:02:52.788825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.419 [2024-07-24 20:02:52.788842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.419 [2024-07-24 20:02:52.788848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.419 [2024-07-24 20:02:52.788854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.419 [2024-07-24 20:02:52.788871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.419 qpair failed and we were unable to recover it. 00:27:01.419 [2024-07-24 20:02:52.798726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.419 [2024-07-24 20:02:52.798865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.798883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.798891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.798897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.798914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.808757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.808894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.808910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.808917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.808923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.808940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 
00:27:01.420 [2024-07-24 20:02:52.818774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.818926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.818942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.818949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.818958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.818975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.828723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.828854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.828870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.828877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.828883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.828900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.838836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.838972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.838988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.838995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.839001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.839017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 
00:27:01.420 [2024-07-24 20:02:52.848878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.849011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.849028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.849034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.849040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.849064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.858887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.859024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.859040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.859054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.859060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.859077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.868918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.869062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.869078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.869085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.869091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.869108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 
00:27:01.420 [2024-07-24 20:02:52.878995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.879159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.879176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.879182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.879188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.879205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.888979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.889120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.889137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.889143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.889149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.889166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 00:27:01.420 [2024-07-24 20:02:52.899005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.899342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.899360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.899366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.899372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.899389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.420 qpair failed and we were unable to recover it. 
00:27:01.420 [2024-07-24 20:02:52.909030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.420 [2024-07-24 20:02:52.909169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.420 [2024-07-24 20:02:52.909185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.420 [2024-07-24 20:02:52.909195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.420 [2024-07-24 20:02:52.909201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.420 [2024-07-24 20:02:52.909217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.919059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.919194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.919211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.919217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.919223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.919240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.929090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.929225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.929242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.929249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.929255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.929272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 
00:27:01.421 [2024-07-24 20:02:52.939109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.939262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.939279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.939286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.939291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.939308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.949149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.949278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.949295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.949301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.949307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.949324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.959167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.959305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.959321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.959328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.959334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.959350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 
00:27:01.421 [2024-07-24 20:02:52.969389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.969522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.969538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.969545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.969551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.969568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.979228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.979360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.979376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.979383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.979389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.979405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:52.989271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.989406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.989422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.989429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.989435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.989451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 
00:27:01.421 [2024-07-24 20:02:52.999298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:52.999430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:52.999446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:52.999456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:52.999462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:52.999478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.421 [2024-07-24 20:02:53.009358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.421 [2024-07-24 20:02:53.009506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.421 [2024-07-24 20:02:53.009523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.421 [2024-07-24 20:02:53.009530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.421 [2024-07-24 20:02:53.009536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.421 [2024-07-24 20:02:53.009553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.421 qpair failed and we were unable to recover it. 00:27:01.682 [2024-07-24 20:02:53.019341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.682 [2024-07-24 20:02:53.019485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.682 [2024-07-24 20:02:53.019502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.682 [2024-07-24 20:02:53.019509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.682 [2024-07-24 20:02:53.019516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:01.682 [2024-07-24 20:02:53.019532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.682 qpair failed and we were unable to recover it. 
00:27:01.682 [2024-07-24 20:02:53.029391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.029532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.029549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.682 [2024-07-24 20:02:53.029556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.682 [2024-07-24 20:02:53.029562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.682 [2024-07-24 20:02:53.029579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.682 qpair failed and we were unable to recover it.
00:27:01.682 [2024-07-24 20:02:53.039412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.039552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.039568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.682 [2024-07-24 20:02:53.039575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.682 [2024-07-24 20:02:53.039580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.682 [2024-07-24 20:02:53.039597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.682 qpair failed and we were unable to recover it.
00:27:01.682 [2024-07-24 20:02:53.049450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.049585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.049602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.682 [2024-07-24 20:02:53.049609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.682 [2024-07-24 20:02:53.049615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.682 [2024-07-24 20:02:53.049632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.682 qpair failed and we were unable to recover it.
00:27:01.682 [2024-07-24 20:02:53.059466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.059603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.059620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.682 [2024-07-24 20:02:53.059627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.682 [2024-07-24 20:02:53.059632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.682 [2024-07-24 20:02:53.059649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.682 qpair failed and we were unable to recover it.
00:27:01.682 [2024-07-24 20:02:53.069500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.069639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.069655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.682 [2024-07-24 20:02:53.069662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.682 [2024-07-24 20:02:53.069668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.682 [2024-07-24 20:02:53.069685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.682 qpair failed and we were unable to recover it.
00:27:01.682 [2024-07-24 20:02:53.079526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.682 [2024-07-24 20:02:53.079656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.682 [2024-07-24 20:02:53.079674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.079681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.079687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.079704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.089541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.089689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.089709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.089717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.089723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.089740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.099562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.099700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.099717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.099724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.099731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.099747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.109615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.109749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.109767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.109774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.109780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.109796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.119639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.119771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.119788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.119795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.119800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.119817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.129706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.129871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.129888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.129894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.129900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.129920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.139746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.139898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.139915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.139922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.139928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.139944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.149671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.149804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.149821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.149828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.149834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.149851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.159741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.159875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.159892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.159899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.159905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.159921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.169803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.169938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.169955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.169962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.169968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.169986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.179821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.179959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.179980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.179987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.179992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.180009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.189774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.189915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.189932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.189939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.189944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.189961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.199890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.200018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.683 [2024-07-24 20:02:53.200035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.683 [2024-07-24 20:02:53.200047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.683 [2024-07-24 20:02:53.200054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.683 [2024-07-24 20:02:53.200071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.683 qpair failed and we were unable to recover it.
00:27:01.683 [2024-07-24 20:02:53.209841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.683 [2024-07-24 20:02:53.210174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.210192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.210199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.210205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.210222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.219866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.220000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.220017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.220024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.220033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.220057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.229930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.230076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.230093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.230100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.230106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.230123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.239988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.240125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.240142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.240149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.240155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.240172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.250022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.250165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.250182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.250189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.250194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.250211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.260049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.260185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.260202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.260209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.260214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.260231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.684 [2024-07-24 20:02:53.270101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.684 [2024-07-24 20:02:53.270258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.684 [2024-07-24 20:02:53.270275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.684 [2024-07-24 20:02:53.270282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.684 [2024-07-24 20:02:53.270288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.684 [2024-07-24 20:02:53.270304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.684 qpair failed and we were unable to recover it.
00:27:01.959 [2024-07-24 20:02:53.280027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.959 [2024-07-24 20:02:53.280179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.959 [2024-07-24 20:02:53.280196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.959 [2024-07-24 20:02:53.280203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.959 [2024-07-24 20:02:53.280209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.959 [2024-07-24 20:02:53.280225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.959 qpair failed and we were unable to recover it.
00:27:01.959 [2024-07-24 20:02:53.290142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.959 [2024-07-24 20:02:53.290279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.959 [2024-07-24 20:02:53.290296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.959 [2024-07-24 20:02:53.290303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.959 [2024-07-24 20:02:53.290309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.959 [2024-07-24 20:02:53.290325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.959 qpair failed and we were unable to recover it.
00:27:01.959 [2024-07-24 20:02:53.300148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.959 [2024-07-24 20:02:53.300284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.959 [2024-07-24 20:02:53.300301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.959 [2024-07-24 20:02:53.300307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.959 [2024-07-24 20:02:53.300313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.959 [2024-07-24 20:02:53.300330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.959 qpair failed and we were unable to recover it.
00:27:01.959 [2024-07-24 20:02:53.310122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.959 [2024-07-24 20:02:53.310440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.959 [2024-07-24 20:02:53.310458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.310465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.310474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.310491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.320147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.320285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.320301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.320308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.320314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.320331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.330239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.330381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.330400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.330407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.330415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.330433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.340266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.340401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.340418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.340424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.340430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.340447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.350302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.350435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.350452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.350458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.350464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.350481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.360340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.360478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.360496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.360502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.360508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.960 [2024-07-24 20:02:53.360525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.960 qpair failed and we were unable to recover it.
00:27:01.960 [2024-07-24 20:02:53.370331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.960 [2024-07-24 20:02:53.370465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.960 [2024-07-24 20:02:53.370482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.960 [2024-07-24 20:02:53.370489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.960 [2024-07-24 20:02:53.370494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.370511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.380430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.380563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.380580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.380587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.961 [2024-07-24 20:02:53.380593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.380609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.390405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.390542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.390558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.390565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.961 [2024-07-24 20:02:53.390571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.390589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.400412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.400544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.400560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.400571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.961 [2024-07-24 20:02:53.400577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.400594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.410445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.410586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.410603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.410610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.961 [2024-07-24 20:02:53.410616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.410632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.420592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.420731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.420748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.420755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.961 [2024-07-24 20:02:53.420760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.961 [2024-07-24 20:02:53.420777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.961 qpair failed and we were unable to recover it.
00:27:01.961 [2024-07-24 20:02:53.430523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.961 [2024-07-24 20:02:53.430658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.961 [2024-07-24 20:02:53.430674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.961 [2024-07-24 20:02:53.430681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.430687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.430703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.440543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.440676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.440692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.440699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.440704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.440721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.450498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.450634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.450650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.450657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.450663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.450679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.460600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.460734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.460751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.460758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.460764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.460781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.470623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.470758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.470775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.470782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.470788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.470805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.480656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.480787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.480803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.480810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.480816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.962 [2024-07-24 20:02:53.480833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.962 qpair failed and we were unable to recover it.
00:27:01.962 [2024-07-24 20:02:53.490670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.962 [2024-07-24 20:02:53.490802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.962 [2024-07-24 20:02:53.490822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.962 [2024-07-24 20:02:53.490829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.962 [2024-07-24 20:02:53.490835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.490851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:01.963 [2024-07-24 20:02:53.500701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.963 [2024-07-24 20:02:53.500835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.963 [2024-07-24 20:02:53.500853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.963 [2024-07-24 20:02:53.500860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.963 [2024-07-24 20:02:53.500866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.500884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:01.963 [2024-07-24 20:02:53.510732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.963 [2024-07-24 20:02:53.510863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.963 [2024-07-24 20:02:53.510882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.963 [2024-07-24 20:02:53.510889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.963 [2024-07-24 20:02:53.510895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.510912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:01.963 [2024-07-24 20:02:53.520703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.963 [2024-07-24 20:02:53.520837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.963 [2024-07-24 20:02:53.520853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.963 [2024-07-24 20:02:53.520860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.963 [2024-07-24 20:02:53.520866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.520882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:01.963 [2024-07-24 20:02:53.530807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.963 [2024-07-24 20:02:53.530965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.963 [2024-07-24 20:02:53.530982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.963 [2024-07-24 20:02:53.530989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.963 [2024-07-24 20:02:53.530994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.531017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:01.963 [2024-07-24 20:02:53.540816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.963 [2024-07-24 20:02:53.540960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.963 [2024-07-24 20:02:53.540976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.963 [2024-07-24 20:02:53.540983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.963 [2024-07-24 20:02:53.540989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:01.963 [2024-07-24 20:02:53.541006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:01.963 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.550843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:02.249 [2024-07-24 20:02:53.550990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:02.249 [2024-07-24 20:02:53.551007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:02.249 [2024-07-24 20:02:53.551014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:02.249 [2024-07-24 20:02:53.551021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:02.249 [2024-07-24 20:02:53.551038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:02.249 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.560869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:02.249 [2024-07-24 20:02:53.561016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:02.249 [2024-07-24 20:02:53.561032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:02.249 [2024-07-24 20:02:53.561039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:02.249 [2024-07-24 20:02:53.561051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:02.249 [2024-07-24 20:02:53.561069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:02.249 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.570929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:02.249 [2024-07-24 20:02:53.571079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:02.249 [2024-07-24 20:02:53.571096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:02.249 [2024-07-24 20:02:53.571103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:02.249 [2024-07-24 20:02:53.571109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:02.249 [2024-07-24 20:02:53.571126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:02.249 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.580965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:02.249 [2024-07-24 20:02:53.581119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:02.249 [2024-07-24 20:02:53.581141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:02.249 [2024-07-24 20:02:53.581148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:02.249 [2024-07-24 20:02:53.581153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:02.249 [2024-07-24 20:02:53.581170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:02.249 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.590911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:02.249 [2024-07-24 20:02:53.591049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:02.249 [2024-07-24 20:02:53.591066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:02.249 [2024-07-24 20:02:53.591073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:02.249 [2024-07-24 20:02:53.591078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:02.249 [2024-07-24 20:02:53.591095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:02.249 qpair failed and we were unable to recover it.
00:27:02.249 [2024-07-24 20:02:53.600954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.249 [2024-07-24 20:02:53.601129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.249 [2024-07-24 20:02:53.601146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.249 [2024-07-24 20:02:53.601153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.249 [2024-07-24 20:02:53.601159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.249 [2024-07-24 20:02:53.601176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.249 qpair failed and we were unable to recover it. 00:27:02.249 [2024-07-24 20:02:53.611032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.249 [2024-07-24 20:02:53.611174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.249 [2024-07-24 20:02:53.611191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.249 [2024-07-24 20:02:53.611197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.249 [2024-07-24 20:02:53.611203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.249 [2024-07-24 20:02:53.611220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.249 qpair failed and we were unable to recover it. 00:27:02.249 [2024-07-24 20:02:53.621060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.249 [2024-07-24 20:02:53.621197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.249 [2024-07-24 20:02:53.621214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.249 [2024-07-24 20:02:53.621221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.249 [2024-07-24 20:02:53.621231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.249 [2024-07-24 20:02:53.621247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.249 qpair failed and we were unable to recover it. 
00:27:02.249 [2024-07-24 20:02:53.631108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.631239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.631255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.631262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.631268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.631285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.641113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.641245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.641262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.641269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.641275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.641292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.651183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.651339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.651356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.651363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.651368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.651385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 
00:27:02.250 [2024-07-24 20:02:53.661169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.661309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.661326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.661333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.661339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.661355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.671149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.671331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.671356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.671363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.671369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.671386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.681238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.681373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.681389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.681396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.681401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.681418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 
00:27:02.250 [2024-07-24 20:02:53.691273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.691410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.691427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.691434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.691440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.691456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.701231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.701376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.701392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.701399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.701405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.701421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.711319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.711454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.711471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.711478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.711486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.711504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 
00:27:02.250 [2024-07-24 20:02:53.721280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.721417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.721434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.721440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.721446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.721463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.731377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.731516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.731533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.731539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.731545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.731563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.250 [2024-07-24 20:02:53.741328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.741465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.741481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.741488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.741495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.741512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 
00:27:02.250 [2024-07-24 20:02:53.751424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.250 [2024-07-24 20:02:53.751563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.250 [2024-07-24 20:02:53.751579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.250 [2024-07-24 20:02:53.751586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.250 [2024-07-24 20:02:53.751592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.250 [2024-07-24 20:02:53.751608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.250 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.761383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.761531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.761548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.761555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.761561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.761577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.771418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.771558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.771575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.771582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.771587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.771604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 
00:27:02.251 [2024-07-24 20:02:53.781554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.781691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.781708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.781714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.781720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.781737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.791469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.791603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.791619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.791626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.791631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.791648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.801608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.801751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.801769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.801780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.801786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.801803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 
00:27:02.251 [2024-07-24 20:02:53.811585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.811724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.811740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.811747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.811753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.811770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.821630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.821766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.821783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.821789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.821795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.821811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.251 [2024-07-24 20:02:53.831587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.831722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.831738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.831745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.831751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.831768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 
00:27:02.251 [2024-07-24 20:02:53.841623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.251 [2024-07-24 20:02:53.841753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.251 [2024-07-24 20:02:53.841770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.251 [2024-07-24 20:02:53.841777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.251 [2024-07-24 20:02:53.841783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.251 [2024-07-24 20:02:53.841799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.251 qpair failed and we were unable to recover it. 00:27:02.512 [2024-07-24 20:02:53.851743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.512 [2024-07-24 20:02:53.851885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.512 [2024-07-24 20:02:53.851902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.512 [2024-07-24 20:02:53.851909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.512 [2024-07-24 20:02:53.851915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.512 [2024-07-24 20:02:53.851932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.512 qpair failed and we were unable to recover it. 00:27:02.512 [2024-07-24 20:02:53.861668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.512 [2024-07-24 20:02:53.861803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.512 [2024-07-24 20:02:53.861820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.512 [2024-07-24 20:02:53.861826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.512 [2024-07-24 20:02:53.861832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.512 [2024-07-24 20:02:53.861849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.512 qpair failed and we were unable to recover it. 
00:27:02.512 [2024-07-24 20:02:53.871780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.512 [2024-07-24 20:02:53.871920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.512 [2024-07-24 20:02:53.871938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.512 [2024-07-24 20:02:53.871945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.512 [2024-07-24 20:02:53.871951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.512 [2024-07-24 20:02:53.871969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.512 qpair failed and we were unable to recover it. 00:27:02.512 [2024-07-24 20:02:53.881743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.512 [2024-07-24 20:02:53.881882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.512 [2024-07-24 20:02:53.881899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.512 [2024-07-24 20:02:53.881906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.512 [2024-07-24 20:02:53.881912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.512 [2024-07-24 20:02:53.881929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.512 qpair failed and we were unable to recover it. 00:27:02.512 [2024-07-24 20:02:53.891844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.512 [2024-07-24 20:02:53.891975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.512 [2024-07-24 20:02:53.891995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.892002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.892008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.892024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 
00:27:02.513 [2024-07-24 20:02:53.901845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.901997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.902014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.902021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.902027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.902049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.911870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.912007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.912024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.912031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.912037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.912060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.921861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.922001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.922018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.922025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.922030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.922053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 
00:27:02.513 [2024-07-24 20:02:53.931943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.932081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.932098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.932106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.932111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.932131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.941889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.942041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.942063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.942070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.942076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.942093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.952006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.952332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.952350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.952357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.952363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.952380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 
00:27:02.513 [2024-07-24 20:02:53.961959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.962110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.962127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.962134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.962139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.962156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.971994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.972169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.972186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.972193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.972199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.972216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:53.982011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.982151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.982172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.982178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.982184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.982201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 
00:27:02.513 [2024-07-24 20:02:53.992099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:53.992237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:53.992253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:53.992260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:53.992266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:53.992283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:54.002161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:54.002291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:54.002308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:54.002315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:54.002321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:54.002338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 00:27:02.513 [2024-07-24 20:02:54.012123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.513 [2024-07-24 20:02:54.012264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.513 [2024-07-24 20:02:54.012281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.513 [2024-07-24 20:02:54.012288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.513 [2024-07-24 20:02:54.012294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.513 [2024-07-24 20:02:54.012311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.513 qpair failed and we were unable to recover it. 
00:27:02.513 [2024-07-24 20:02:54.022141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.022272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.022289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.022296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.022302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.022322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.032225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.032544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.032561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.032567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.032573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.032590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.042206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.042347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.042364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.042371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.042377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.042394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 
00:27:02.514 [2024-07-24 20:02:54.052314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.052451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.052467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.052474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.052480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.052497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.062530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.062665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.062682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.062689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.062695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.062712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.072299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.072434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.072451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.072458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.072464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.072481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 
00:27:02.514 [2024-07-24 20:02:54.082376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.082513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.082530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.082537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.082542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.082559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.092434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.092566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.092582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.092589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.092595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.092611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 00:27:02.514 [2024-07-24 20:02:54.102370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.514 [2024-07-24 20:02:54.102516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.514 [2024-07-24 20:02:54.102533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.514 [2024-07-24 20:02:54.102539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.514 [2024-07-24 20:02:54.102545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.514 [2024-07-24 20:02:54.102562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.514 qpair failed and we were unable to recover it. 
00:27:02.775 [2024-07-24 20:02:54.112439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.775 [2024-07-24 20:02:54.112583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.775 [2024-07-24 20:02:54.112600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.775 [2024-07-24 20:02:54.112607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.775 [2024-07-24 20:02:54.112616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.775 [2024-07-24 20:02:54.112633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.775 qpair failed and we were unable to recover it. 00:27:02.775 [2024-07-24 20:02:54.122509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.775 [2024-07-24 20:02:54.122646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.775 [2024-07-24 20:02:54.122663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.775 [2024-07-24 20:02:54.122670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.775 [2024-07-24 20:02:54.122675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.775 [2024-07-24 20:02:54.122692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.775 qpair failed and we were unable to recover it. 00:27:02.775 [2024-07-24 20:02:54.132474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.775 [2024-07-24 20:02:54.132612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.775 [2024-07-24 20:02:54.132628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.775 [2024-07-24 20:02:54.132635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.775 [2024-07-24 20:02:54.132641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.775 [2024-07-24 20:02:54.132658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.775 qpair failed and we were unable to recover it. 
00:27:02.775 [2024-07-24 20:02:54.142490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.775 [2024-07-24 20:02:54.142638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.775 [2024-07-24 20:02:54.142655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.775 [2024-07-24 20:02:54.142661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.775 [2024-07-24 20:02:54.142667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.142684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.152594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.152731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.152748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.152755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.152760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.152777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.162542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.162721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.162737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.162744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.162750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.162767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 
00:27:02.776 [2024-07-24 20:02:54.172573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.172717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.172733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.172740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.172746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.172763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.182653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.182823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.182840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.182847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.182853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.182870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.192680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.192813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.192830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.192837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.192843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.192860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 
00:27:02.776 [2024-07-24 20:02:54.202651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.202790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.202807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.202817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.202823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.202840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.212765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.212899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.212916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.212923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.212928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.212945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.222704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.222841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.222857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.222864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.222870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.222886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 
00:27:02.776 [2024-07-24 20:02:54.232819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.232947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.232965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.232971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.232977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.232995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.242841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.242976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.242995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.243001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.243008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.243026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.252880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.253119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.253137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.253145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.253151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.253168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 
00:27:02.776 [2024-07-24 20:02:54.262904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.263049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.263066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.776 [2024-07-24 20:02:54.263072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.776 [2024-07-24 20:02:54.263078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.776 [2024-07-24 20:02:54.263095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.776 qpair failed and we were unable to recover it. 00:27:02.776 [2024-07-24 20:02:54.272937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.776 [2024-07-24 20:02:54.273081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.776 [2024-07-24 20:02:54.273097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.273104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.273110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.273127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.282949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.283083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.283100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.283107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.283113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.283130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 
00:27:02.777 [2024-07-24 20:02:54.293002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.293137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.293153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.293166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.293172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.293189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.303019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.303161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.303178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.303184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.303190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.303207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.313065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.313204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.313220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.313227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.313233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.313249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 
00:27:02.777 [2024-07-24 20:02:54.323083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.323218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.323235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.323242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.323248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.323265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.333113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.333247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.333264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.333270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.333276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.333293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.343146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.343284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.343301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.343308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.343314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.343331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 
00:27:02.777 [2024-07-24 20:02:54.353196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.353334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.353350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.353357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.353363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.353380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:02.777 [2024-07-24 20:02:54.363203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:02.777 [2024-07-24 20:02:54.363340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:02.777 [2024-07-24 20:02:54.363357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:02.777 [2024-07-24 20:02:54.363364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:02.777 [2024-07-24 20:02:54.363369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:02.777 [2024-07-24 20:02:54.363386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:02.777 qpair failed and we were unable to recover it. 00:27:03.038 [2024-07-24 20:02:54.373229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.373367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.373384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.373391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.373397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.373414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 
00:27:03.038 [2024-07-24 20:02:54.383261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.383404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.383423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.383430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.383436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.383453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 00:27:03.038 [2024-07-24 20:02:54.393220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.393359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.393376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.393382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.393388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.393404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 00:27:03.038 [2024-07-24 20:02:54.403311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.403444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.403460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.403467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.403472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.403489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 
00:27:03.038 [2024-07-24 20:02:54.413356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.413493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.413509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.413516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.413521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.413538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 00:27:03.038 [2024-07-24 20:02:54.423424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.423558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.423574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.423580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.423586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.423606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 00:27:03.038 [2024-07-24 20:02:54.433441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.433572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.433588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.038 [2024-07-24 20:02:54.433595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.038 [2024-07-24 20:02:54.433601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.038 [2024-07-24 20:02:54.433618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.038 qpair failed and we were unable to recover it. 
00:27:03.038 [2024-07-24 20:02:54.443429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.038 [2024-07-24 20:02:54.443562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.038 [2024-07-24 20:02:54.443579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.443585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.443591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.443608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.453403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.453541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.453557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.453564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.453570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.453586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.463469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.463605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.463621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.463628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.463634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.463650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 
00:27:03.039 [2024-07-24 20:02:54.473511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.473642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.473662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.473669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.473674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.473692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.483551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.483686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.483703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.483709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.483715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.483732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.493580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.493715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.493731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.493738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.493744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.493761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 
00:27:03.039 [2024-07-24 20:02:54.503585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.503721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.503737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.503744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.503749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.503765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.513640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.513771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.513787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.513794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.513803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.513820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.523644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.523779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.523795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.523802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.523808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.523825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 
00:27:03.039 [2024-07-24 20:02:54.533714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.533868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.533884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.533891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.533897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.533913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.543743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.543878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.039 [2024-07-24 20:02:54.543895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.039 [2024-07-24 20:02:54.543901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.039 [2024-07-24 20:02:54.543907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.039 [2024-07-24 20:02:54.543924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.039 qpair failed and we were unable to recover it. 00:27:03.039 [2024-07-24 20:02:54.553746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.039 [2024-07-24 20:02:54.553884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.553900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.553907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.553913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.553930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 
00:27:03.040 [2024-07-24 20:02:54.563771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.563902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.563918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.563925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.563931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.563947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 00:27:03.040 [2024-07-24 20:02:54.573806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.573943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.573959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.573966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.573972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.573988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 00:27:03.040 [2024-07-24 20:02:54.583819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.583960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.583977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.583984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.583989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.584005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 
00:27:03.040 [2024-07-24 20:02:54.593854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.593990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.594006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.594012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.594018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.594035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 00:27:03.040 [2024-07-24 20:02:54.603886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.604024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.604041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.604058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.604064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.604081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 00:27:03.040 [2024-07-24 20:02:54.613927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.614065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.614082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.614089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.614095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.614112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 
00:27:03.040 [2024-07-24 20:02:54.623869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.040 [2024-07-24 20:02:54.624007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.040 [2024-07-24 20:02:54.624023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.040 [2024-07-24 20:02:54.624030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.040 [2024-07-24 20:02:54.624036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.040 [2024-07-24 20:02:54.624060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.040 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.634069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.634214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.634231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.634238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.634243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.634260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.643995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.644138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.644155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.644162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.644168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.644184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 
00:27:03.301 [2024-07-24 20:02:54.654092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.654229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.654245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.654252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.654257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.654274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.664068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.664203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.664220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.664226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.664232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.664248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.674078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.674215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.674232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.674238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.674244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.674261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 
00:27:03.301 [2024-07-24 20:02:54.684099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.684233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.684250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.684257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.684262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.684279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.694144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.301 [2024-07-24 20:02:54.694288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.301 [2024-07-24 20:02:54.694304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.301 [2024-07-24 20:02:54.694315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.301 [2024-07-24 20:02:54.694320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.301 [2024-07-24 20:02:54.694337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.301 qpair failed and we were unable to recover it. 00:27:03.301 [2024-07-24 20:02:54.704176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.302 [2024-07-24 20:02:54.704315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.302 [2024-07-24 20:02:54.704332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.302 [2024-07-24 20:02:54.704338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.302 [2024-07-24 20:02:54.704344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.302 [2024-07-24 20:02:54.704361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.302 qpair failed and we were unable to recover it. 
00:27:03.302 [2024-07-24 20:02:54.714202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.302 [2024-07-24 20:02:54.714353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.302 [2024-07-24 20:02:54.714370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.302 [2024-07-24 20:02:54.714377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.302 [2024-07-24 20:02:54.714382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.302 [2024-07-24 20:02:54.714399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.302 qpair failed and we were unable to recover it. 00:27:03.302 [2024-07-24 20:02:54.724225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.302 [2024-07-24 20:02:54.724361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.302 [2024-07-24 20:02:54.724377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.302 [2024-07-24 20:02:54.724385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.302 [2024-07-24 20:02:54.724390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.302 [2024-07-24 20:02:54.724407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.302 qpair failed and we were unable to recover it. 00:27:03.302 [2024-07-24 20:02:54.734290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.302 [2024-07-24 20:02:54.734427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.302 [2024-07-24 20:02:54.734444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.302 [2024-07-24 20:02:54.734451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.302 [2024-07-24 20:02:54.734457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.302 [2024-07-24 20:02:54.734473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.302 qpair failed and we were unable to recover it. 
00:27:03.302 [2024-07-24 20:02:54.744291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.744430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.744448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.744455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.744460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.744477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.754431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.754570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.754588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.754595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.754601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.754618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.764349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.764489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.764505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.764512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.764518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.764534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.774353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.774489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.774506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.774512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.774517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.774534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.784314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.784454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.784473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.784479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.784485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.784502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.794432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.794567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.794584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.794591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.794597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.794613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.804447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.804584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.804601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.804607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.804613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.804630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.814491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.814629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.814645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.814652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.814658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.302 [2024-07-24 20:02:54.814675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.302 qpair failed and we were unable to recover it.
00:27:03.302 [2024-07-24 20:02:54.824517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.302 [2024-07-24 20:02:54.824653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.302 [2024-07-24 20:02:54.824669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.302 [2024-07-24 20:02:54.824676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.302 [2024-07-24 20:02:54.824682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.824702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.834565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.834702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.834718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.834725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.834731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.834748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.844580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.844725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.844741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.844748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.844754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.844770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.854580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.854724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.854741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.854747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.854753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.854770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.864604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.864742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.864758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.864765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.864771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.864788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.874680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.874814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.874834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.874841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.874847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.874863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.884678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.884809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.884826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.884833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.884839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.884856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.303 [2024-07-24 20:02:54.894714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.303 [2024-07-24 20:02:54.894863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.303 [2024-07-24 20:02:54.894880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.303 [2024-07-24 20:02:54.894887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.303 [2024-07-24 20:02:54.894892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.303 [2024-07-24 20:02:54.894909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.303 qpair failed and we were unable to recover it.
00:27:03.563 [2024-07-24 20:02:54.904741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.563 [2024-07-24 20:02:54.904888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.563 [2024-07-24 20:02:54.904905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.563 [2024-07-24 20:02:54.904912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.563 [2024-07-24 20:02:54.904918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.563 [2024-07-24 20:02:54.904936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.563 qpair failed and we were unable to recover it.
00:27:03.563 [2024-07-24 20:02:54.914752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.563 [2024-07-24 20:02:54.914889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.563 [2024-07-24 20:02:54.914906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.563 [2024-07-24 20:02:54.914913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.563 [2024-07-24 20:02:54.914923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.563 [2024-07-24 20:02:54.914939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.563 qpair failed and we were unable to recover it.
00:27:03.563 [2024-07-24 20:02:54.924807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.563 [2024-07-24 20:02:54.924940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.563 [2024-07-24 20:02:54.924957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.563 [2024-07-24 20:02:54.924964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.563 [2024-07-24 20:02:54.924969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.563 [2024-07-24 20:02:54.924986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.563 qpair failed and we were unable to recover it.
00:27:03.563 [2024-07-24 20:02:54.934828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.934964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.934981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.934988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.934994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.935011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.944867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.945000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.945017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.945023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.945029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.945051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.954931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.955070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.955086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.955093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.955099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.955116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.964927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.965070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.965087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.965094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.965100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.965117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.974967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.975109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.975126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.975133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.975139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.975156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.984987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.985123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.985140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.985147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.985152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.985169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:54.994988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:54.995128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:54.995145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:54.995152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:54.995158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:54.995175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.005069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.005199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.005216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.005223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.005232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.005249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.015093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.015228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.015245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.015251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.015257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.015274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.025121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.025263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.025279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.025287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.025292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.025308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.035146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.035284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.035301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.035307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.035313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.035329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.045179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.045317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.045334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.045341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.045346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.045363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.564 [2024-07-24 20:02:55.055211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.564 [2024-07-24 20:02:55.055347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.564 [2024-07-24 20:02:55.055364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.564 [2024-07-24 20:02:55.055370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.564 [2024-07-24 20:02:55.055375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.564 [2024-07-24 20:02:55.055392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.564 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.065234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.065369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.065386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.065393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.065399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.065415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.075265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.075398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.075415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.075422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.075428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.075445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.085291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.085427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.085444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.085450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.085456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.085473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.095322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.095452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.095469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.095478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.095484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.095501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.105352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.105492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.105509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.105515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.105521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.105538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.115391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.115529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.115546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.115552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.115558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.115575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.125418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.125550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.125567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.125574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.125579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.125596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.135446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.135582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.135598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.135605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.135611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.135628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.145466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.145597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.145614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.145621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.145627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.145643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.565 [2024-07-24 20:02:55.155493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.565 [2024-07-24 20:02:55.155634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.565 [2024-07-24 20:02:55.155650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.565 [2024-07-24 20:02:55.155657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.565 [2024-07-24 20:02:55.155663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.565 [2024-07-24 20:02:55.155679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.565 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.165527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.165684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.165701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.165708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.165713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.165730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.175479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.175617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.175634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.175640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.175646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.175663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.185579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.185731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.185751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.185758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.185763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.185780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.195618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.195749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.195766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.195773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.195779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.195795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.205628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.205760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.205776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.205783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.205789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.205805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.215724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.215864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.215881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.215888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.215894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.215910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.225694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.225826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.225843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.225850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.225855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.225875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.235749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.826 [2024-07-24 20:02:55.235886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.826 [2024-07-24 20:02:55.235905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.826 [2024-07-24 20:02:55.235913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.826 [2024-07-24 20:02:55.235919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.826 [2024-07-24 20:02:55.235936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.826 qpair failed and we were unable to recover it.
00:27:03.826 [2024-07-24 20:02:55.245749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.245881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.245900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.245908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.245914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.245932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.255694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.255839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.255857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.255865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.255871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.255888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.265727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.265862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.265880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.265888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.265894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.265911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.275767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.275943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.275963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.275972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.275978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.275995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.285806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.285937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.285954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.285961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.285967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.285984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.295872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.296017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.296033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.296040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.296052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.296069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.305900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.827 [2024-07-24 20:02:55.306029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.827 [2024-07-24 20:02:55.306051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.827 [2024-07-24 20:02:55.306058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.827 [2024-07-24 20:02:55.306063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.827 [2024-07-24 20:02:55.306080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.827 qpair failed and we were unable to recover it.
00:27:03.827 [2024-07-24 20:02:55.315913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.316050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.316067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.316074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.316082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.316099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 00:27:03.827 [2024-07-24 20:02:55.325973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.326112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.326129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.326136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.326142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.326159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 00:27:03.827 [2024-07-24 20:02:55.335999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.336139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.336161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.336168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.336174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.336191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 
00:27:03.827 [2024-07-24 20:02:55.346021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.346161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.346178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.346185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.346190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.346207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 00:27:03.827 [2024-07-24 20:02:55.356066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.356201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.356217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.356224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.356230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.356247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 00:27:03.827 [2024-07-24 20:02:55.366052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.827 [2024-07-24 20:02:55.366187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.827 [2024-07-24 20:02:55.366204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.827 [2024-07-24 20:02:55.366211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.827 [2024-07-24 20:02:55.366217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.827 [2024-07-24 20:02:55.366233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.827 qpair failed and we were unable to recover it. 
00:27:03.828 [2024-07-24 20:02:55.376115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.828 [2024-07-24 20:02:55.376247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.828 [2024-07-24 20:02:55.376264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.828 [2024-07-24 20:02:55.376271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.828 [2024-07-24 20:02:55.376276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.828 [2024-07-24 20:02:55.376293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.828 qpair failed and we were unable to recover it. 00:27:03.828 [2024-07-24 20:02:55.386131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.828 [2024-07-24 20:02:55.386284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.828 [2024-07-24 20:02:55.386300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.828 [2024-07-24 20:02:55.386307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.828 [2024-07-24 20:02:55.386312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.828 [2024-07-24 20:02:55.386330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.828 qpair failed and we were unable to recover it. 00:27:03.828 [2024-07-24 20:02:55.396195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:03.828 [2024-07-24 20:02:55.396509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:03.828 [2024-07-24 20:02:55.396526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:03.828 [2024-07-24 20:02:55.396532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:03.828 [2024-07-24 20:02:55.396538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:03.828 [2024-07-24 20:02:55.396555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.828 qpair failed and we were unable to recover it. 
00:27:03.828 [2024-07-24 20:02:55.406189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.828 [2024-07-24 20:02:55.406326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.828 [2024-07-24 20:02:55.406342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.828 [2024-07-24 20:02:55.406349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.828 [2024-07-24 20:02:55.406358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.828 [2024-07-24 20:02:55.406375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.828 qpair failed and we were unable to recover it.
00:27:03.828 [2024-07-24 20:02:55.416204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.828 [2024-07-24 20:02:55.416361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.828 [2024-07-24 20:02:55.416378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.828 [2024-07-24 20:02:55.416385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.828 [2024-07-24 20:02:55.416391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:03.828 [2024-07-24 20:02:55.416407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.828 qpair failed and we were unable to recover it.
00:27:04.087 [2024-07-24 20:02:55.426309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.087 [2024-07-24 20:02:55.426455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.087 [2024-07-24 20:02:55.426472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.087 [2024-07-24 20:02:55.426479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.087 [2024-07-24 20:02:55.426486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.087 [2024-07-24 20:02:55.426503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.087 qpair failed and we were unable to recover it.
00:27:04.087 [2024-07-24 20:02:55.436231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.087 [2024-07-24 20:02:55.436370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.087 [2024-07-24 20:02:55.436387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.087 [2024-07-24 20:02:55.436394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.087 [2024-07-24 20:02:55.436399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.087 [2024-07-24 20:02:55.436415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.087 qpair failed and we were unable to recover it.
00:27:04.087 [2024-07-24 20:02:55.446339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.087 [2024-07-24 20:02:55.446475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.087 [2024-07-24 20:02:55.446492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.087 [2024-07-24 20:02:55.446499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.087 [2024-07-24 20:02:55.446505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.087 [2024-07-24 20:02:55.446521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.087 qpair failed and we were unable to recover it.
00:27:04.087 [2024-07-24 20:02:55.456303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.087 [2024-07-24 20:02:55.456437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.087 [2024-07-24 20:02:55.456454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.087 [2024-07-24 20:02:55.456461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.087 [2024-07-24 20:02:55.456467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.087 [2024-07-24 20:02:55.456483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.087 qpair failed and we were unable to recover it.
00:27:04.087 [2024-07-24 20:02:55.466414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.087 [2024-07-24 20:02:55.466556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.087 [2024-07-24 20:02:55.466573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.087 [2024-07-24 20:02:55.466580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.466585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.466602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.476451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.476584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.476601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.476608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.476614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.476631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.486502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.486637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.486654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.486660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.486666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.486683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.496469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.496606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.496623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.496633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.496639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.496656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.506418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.506548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.506564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.506571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.506577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.506594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.516455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.516633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.516649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.516657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.516663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.516679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.526572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.526707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.526724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.526731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.526737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.526754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.536590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.536731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.536747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.536754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.536759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.536776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.546582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.546731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.546748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.546755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.546761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.546778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.556594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.556735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.556752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.556759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.556764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.556781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.566574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.566706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.566723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.566729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.566735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.566752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.576624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.576761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.576778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.576784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.576790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.576806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.586719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.586857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.088 [2024-07-24 20:02:55.586877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.088 [2024-07-24 20:02:55.586884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.088 [2024-07-24 20:02:55.586889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.088 [2024-07-24 20:02:55.586906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.088 qpair failed and we were unable to recover it.
00:27:04.088 [2024-07-24 20:02:55.596808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.088 [2024-07-24 20:02:55.596991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.597009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.597015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.597021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.597038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.606750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.606886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.606903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.606910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.606916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.606932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.616730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.616865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.616882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.616889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.616894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.616911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.626823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.626959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.626976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.626982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.626988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.627008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.636772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.636908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.636925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.636931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.636937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.636954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.646803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.646933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.646950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.646957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.646962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.646979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.656907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.657067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.657084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.657091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.657097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.657114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.666845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.666974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.666991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.666999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.667006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.667022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.089 [2024-07-24 20:02:55.676961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.089 [2024-07-24 20:02:55.677098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.089 [2024-07-24 20:02:55.677119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.089 [2024-07-24 20:02:55.677125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.089 [2024-07-24 20:02:55.677131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.089 [2024-07-24 20:02:55.677147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.089 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.686989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.687138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.687155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.687162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.687168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.687184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.697050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.697195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.697212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.697219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.697225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.697242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.707049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.707212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.707229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.707236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.707242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.707259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.717092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.717227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.717244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.717251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.717257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.717277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.727028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.727167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.727185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.727191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.727197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.727214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.737144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.737275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.737292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.737299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.737304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.737321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.747160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.747296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.747313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.747320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.747326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.747344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.350 [2024-07-24 20:02:55.757137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.350 [2024-07-24 20:02:55.757273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.350 [2024-07-24 20:02:55.757291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.350 [2024-07-24 20:02:55.757298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.350 [2024-07-24 20:02:55.757304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.350 [2024-07-24 20:02:55.757321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.350 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.767238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.767378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.767395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.767401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.767407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.767424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.777207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.777381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.777398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.777405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.777411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.777428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.787292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.787429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.787446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.787452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.787458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.787474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.797292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.797428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.797444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.797451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.797457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.797474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.807315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.807453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.807472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.807479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.807491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.807508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.817376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.817523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.817540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.817547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.817553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.817569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.827393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.827556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.827572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.827579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.827585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.827602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.837360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.837499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.837516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.837523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.837528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.837545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.847458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.847589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.847606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.847613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.847618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.847635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.857487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.857624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.857641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.857647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.857653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.857670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.867519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.867656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.867672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.867678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.867684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.867701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.351 [2024-07-24 20:02:55.877555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.351 [2024-07-24 20:02:55.877695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.351 [2024-07-24 20:02:55.877711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.351 [2024-07-24 20:02:55.877718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.351 [2024-07-24 20:02:55.877724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90
00:27:04.351 [2024-07-24 20:02:55.877740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.351 qpair failed and we were unable to recover it.
00:27:04.352 [2024-07-24 20:02:55.887576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.887712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.887728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.887735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.887741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.887757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-07-24 20:02:55.897617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.897757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.897773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.897783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.897789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.897805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-07-24 20:02:55.907626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.907762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.907778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.907785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.907791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.907808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-07-24 20:02:55.917673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.917808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.917825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.917832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.917837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.917854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-07-24 20:02:55.927694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.927833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.927850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.927857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.927862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.927879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-07-24 20:02:55.937754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.352 [2024-07-24 20:02:55.937890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.352 [2024-07-24 20:02:55.937907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.352 [2024-07-24 20:02:55.937914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.352 [2024-07-24 20:02:55.937920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.352 [2024-07-24 20:02:55.937936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-24 20:02:55.947744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.947888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.947905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.947912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.947917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.612 [2024-07-24 20:02:55.947934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-24 20:02:55.957793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.957931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.957947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.957954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.957960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.612 [2024-07-24 20:02:55.957976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-24 20:02:55.967866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.968024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.968040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.968052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.968058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.612 [2024-07-24 20:02:55.968075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.612 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-24 20:02:55.977829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.977963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.977980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.977987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.977992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.612 [2024-07-24 20:02:55.978009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-24 20:02:55.987788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.987921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.987938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.987948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.987954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e8000b90 00:27:04.612 [2024-07-24 20:02:55.987970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-24 20:02:55.997938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:55.998133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:55.998161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:55.998173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:55.998182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2d8000b90 00:27:04.612 [2024-07-24 20:02:55.998207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.612 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-24 20:02:56.007918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.612 [2024-07-24 20:02:56.008064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.612 [2024-07-24 20:02:56.008083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.612 [2024-07-24 20:02:56.008090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.612 [2024-07-24 20:02:56.008096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2d8000b90 00:27:04.612 [2024-07-24 20:02:56.008115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-24 20:02:56.017936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.613 [2024-07-24 20:02:56.018076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.613 [2024-07-24 20:02:56.018093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.613 [2024-07-24 20:02:56.018100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.613 [2024-07-24 20:02:56.018106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2d8000b90 00:27:04.613 [2024-07-24 20:02:56.018123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.613 qpair failed and we were unable to recover it. 00:27:04.613 [2024-07-24 20:02:56.027958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.613 [2024-07-24 20:02:56.028107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.613 [2024-07-24 20:02:56.028129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.613 [2024-07-24 20:02:56.028137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.613 [2024-07-24 20:02:56.028144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e0000b90 00:27:04.613 [2024-07-24 20:02:56.028163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:04.613 qpair failed and we were unable to recover it. 
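Note that this is not a single queue retried in a loop: the transport qpair pointers and qpair ids change across the blocks (ids 1, 4, and 2 on three different 0x7fb2…b90 tqpairs), i.e. each per-core I/O queue of the association fails its CONNECT independently. A hedged way to inspect what an initiator currently has attached, using standard nvme-cli (the resulting device names depend on the host):

  # Show which fabric subsystems/controllers this host holds right now...
  nvme list-subsys
  # ...and the namespaces exposed through them.
  nvme list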
00:27:04.613 [2024-07-24 20:02:56.038017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.613 [2024-07-24 20:02:56.038160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.613 [2024-07-24 20:02:56.038178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.613 [2024-07-24 20:02:56.038186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.613 [2024-07-24 20:02:56.038192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb2e0000b90 00:27:04.613 [2024-07-24 20:02:56.038210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:04.613 qpair failed and we were unable to recover it. 00:27:04.613 [2024-07-24 20:02:56.038358] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:04.613 A controller has encountered a failure and is being reset. 00:27:04.613 [2024-07-24 20:02:56.048081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.613 [2024-07-24 20:02:56.048222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.613 [2024-07-24 20:02:56.048250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.613 [2024-07-24 20:02:56.048261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.613 [2024-07-24 20:02:56.048269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e54f30 00:27:04.613 [2024-07-24 20:02:56.048294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.613 qpair failed and we were unable to recover it. 00:27:04.613 [2024-07-24 20:02:56.058253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.613 [2024-07-24 20:02:56.058390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.613 [2024-07-24 20:02:56.058408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.613 [2024-07-24 20:02:56.058415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.613 [2024-07-24 20:02:56.058421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e54f30 00:27:04.613 [2024-07-24 20:02:56.058439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.613 qpair failed and we were unable to recover it. 00:27:04.613 Controller properly reset. 
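The qpair-level retries end when the initiator also fails to submit a Keep Alive for nqn.2016-06.io.spdk:cnode1: at that point it declares the controller failed, resets it, and the reset succeeds ("Controller properly reset."), after which the re-initialization below attaches one I/O queue per lcore. The keep-alive interval is what bounds how long a dead association like this lingers; a sketch of connecting with an explicit keep-alive timeout, assuming nvme-cli, where the 5-second value is illustrative and not taken from this test:

  # Reconnect with an explicit keep-alive timeout in seconds; -k is
  # nvme-cli's --keep-alive-tmo. 5 s here is an illustrative value only.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 -k 5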
00:27:04.613 Initializing NVMe Controllers 00:27:04.613 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:04.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:04.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:04.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:04.613 Initialization complete. Launching workers. 00:27:04.613 Starting thread on core 1 00:27:04.613 Starting thread on core 2 00:27:04.613 Starting thread on core 3 00:27:04.613 Starting thread on core 0 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:04.613 00:27:04.613 real 0m11.405s 00:27:04.613 user 0m20.450s 00:27:04.613 sys 0m4.338s 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.613 ************************************ 00:27:04.613 END TEST nvmf_target_disconnect_tc2 00:27:04.613 ************************************ 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.613 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.873 rmmod nvme_tcp 00:27:04.873 rmmod nvme_fabrics 00:27:04.873 rmmod nvme_keyring 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2206914 ']' 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2206914 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2206914 ']' 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2206914 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2206914 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2206914' 00:27:04.873 killing process with pid 2206914 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2206914 00:27:04.873 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2206914 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.134 20:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.042 00:27:07.042 real 0m18.906s 00:27:07.042 user 0m47.600s 00:27:07.042 sys 0m8.422s 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:07.042 ************************************ 00:27:07.042 END TEST nvmf_target_disconnect 00:27:07.042 ************************************ 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:07.042 00:27:07.042 real 5m45.361s 00:27:07.042 user 10m53.420s 00:27:07.042 sys 1m44.708s 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.042 20:02:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.042 ************************************ 00:27:07.042 END TEST nvmf_host 00:27:07.042 ************************************ 00:27:07.042 00:27:07.042 real 21m3.415s 00:27:07.042 user 45m38.423s 00:27:07.042 sys 6m13.119s 00:27:07.042 20:02:58 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.042 20:02:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.042 ************************************ 00:27:07.042 END TEST nvmf_tcp 00:27:07.042 ************************************ 00:27:07.303 20:02:58 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:27:07.303 20:02:58 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:07.303 20:02:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 
1 ']' 00:27:07.303 20:02:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:07.303 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:27:07.303 ************************************ 00:27:07.303 START TEST spdkcli_nvmf_tcp 00:27:07.303 ************************************ 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:07.303 * Looking for test storage... 00:27:07.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2208535 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2208535 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2208535 ']' 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.303 20:02:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.303 [2024-07-24 20:02:58.853679] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:27:07.303 [2024-07-24 20:02:58.853727] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208535 ] 00:27:07.303 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.563 [2024-07-24 20:02:58.907881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:07.563 [2024-07-24 20:02:58.980689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.563 [2024-07-24 20:02:58.980692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.133 20:02:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:08.133 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:08.133 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:08.133 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:08.133 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:08.133 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:08.133 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:08.133 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.133 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.133 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:08.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:08.133 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:08.133 ' 00:27:10.675 [2024-07-24 20:03:02.072251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.056 [2024-07-24 20:03:03.248180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:13.965 [2024-07-24 20:03:05.418952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:15.875 [2024-07-24 20:03:07.280786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:17.257 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:17.257 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:17.257 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.257 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.257 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:17.257 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:17.257 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:17.257 20:03:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:17.257 20:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.257 20:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.516 20:03:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:17.516 20:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.516 20:03:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.516 20:03:08 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:17.516 20:03:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.776 20:03:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:17.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:17.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:17.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:17.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:17.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:17.776 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:17.776 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:17.776 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:17.776 ' 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:23.065 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:23.065 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:27:23.065 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:23.065 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2208535 ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2208535' 00:27:23.065 killing process with pid 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2208535 ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2208535 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2208535 ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2208535 00:27:23.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2208535) - No such process 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2208535 is not found' 00:27:23.065 Process with pid 2208535 is not found 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:23.065 00:27:23.065 real 0m15.827s 00:27:23.065 user 0m32.822s 00:27:23.065 sys 0m0.727s 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.065 20:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.065 
************************************ 00:27:23.065 END TEST spdkcli_nvmf_tcp 00:27:23.065 ************************************ 00:27:23.065 20:03:14 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:23.065 20:03:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:23.065 20:03:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:23.065 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:27:23.065 ************************************ 00:27:23.065 START TEST nvmf_identify_passthru 00:27:23.065 ************************************ 00:27:23.065 20:03:14 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:23.065 * Looking for test storage... 00:27:23.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.065 20:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.065 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.325 20:03:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.325 20:03:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.325 20:03:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.325 20:03:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:23.325 20:03:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:23.325 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:23.325 20:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.325 20:03:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.326 20:03:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.326 20:03:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.326 20:03:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.326 20:03:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:23.326 20:03:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.326 20:03:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.326 20:03:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.326 20:03:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.326 20:03:14 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.326 20:03:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.608 20:03:19 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:28.608 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:28.608 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:28.608 Found net devices under 0000:86:00.0: cvl_0_0 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:28.608 Found net devices under 0000:86:00.1: cvl_0_1 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
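[Note: the discovery loop traced above resolves each supported NIC PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that same lookup, hard-coding the two E810 ports this run found (0000:86:00.0 and 0000:86:00.1 — any other host will differ):

  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $dev ]] || continue              # skip functions with no bound netdev
          echo "Found net device under $pci: ${dev##*/}"
      done
  done
]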
00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:28.608 00:27:28.608 --- 10.0.0.2 ping statistics --- 00:27:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.608 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:27:28.608 00:27:28.608 --- 10.0.0.1 ping statistics --- 00:27:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.608 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.608 20:03:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.609 20:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:28.609 20:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.609 20:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:28.609 20:03:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:28.609 20:03:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:28.609 20:03:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:28.609 20:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:28.609 20:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:28.609 20:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:28.609 20:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:28.609 20:03:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:28.609 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.854 
20:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:32.854 20:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:32.854 20:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:32.854 20:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:32.854 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2215937 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.125 20:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2215937 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2215937 ']' 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.125 20:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.125 [2024-07-24 20:03:28.331108] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:27:37.125 [2024-07-24 20:03:28.331154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.125 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.125 [2024-07-24 20:03:28.389667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.125 [2024-07-24 20:03:28.471081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.125 [2024-07-24 20:03:28.471117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
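[Note: the startup notices continue below; because nvmf_tgt was launched with --wait-for-rpc, the target idles until identify_passthru.sh configures it over JSON-RPC. The sequence traced in full further down reduces to this sketch, assuming the default /var/tmp/spdk.sock socket used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_set_config --passthru-identify-ctrlr   # forward Identify-Controller to the backing NVMe device
  $rpc framework_start_init                        # finish the deferred subsystem initialization
  $rpc nvmf_create_transport -t tcp -o -u 8192     # transport options copied verbatim from the trace
]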
00:27:37.125 [2024-07-24 20:03:28.471124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.125 [2024-07-24 20:03:28.471130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.125 [2024-07-24 20:03:28.471135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.125 [2024-07-24 20:03:28.471186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.125 [2024-07-24 20:03:28.471281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.125 [2024-07-24 20:03:28.471364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.125 [2024-07-24 20:03:28.471365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:27:37.694 20:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.694 INFO: Log level set to 20 00:27:37.694 INFO: Requests: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "method": "nvmf_set_config", 00:27:37.694 "id": 1, 00:27:37.694 "params": { 00:27:37.694 "admin_cmd_passthru": { 00:27:37.694 "identify_ctrlr": true 00:27:37.694 } 00:27:37.694 } 00:27:37.694 } 00:27:37.694 00:27:37.694 INFO: response: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "id": 1, 00:27:37.694 "result": true 00:27:37.694 } 00:27:37.694 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.694 20:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.694 INFO: Setting log level to 20 00:27:37.694 INFO: Setting log level to 20 00:27:37.694 INFO: Log level set to 20 00:27:37.694 INFO: Log level set to 20 00:27:37.694 INFO: Requests: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "method": "framework_start_init", 00:27:37.694 "id": 1 00:27:37.694 } 00:27:37.694 00:27:37.694 INFO: Requests: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "method": "framework_start_init", 00:27:37.694 "id": 1 00:27:37.694 } 00:27:37.694 00:27:37.694 [2024-07-24 20:03:29.226894] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:37.694 INFO: response: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "id": 1, 00:27:37.694 "result": true 00:27:37.694 } 00:27:37.694 00:27:37.694 INFO: response: 00:27:37.694 { 00:27:37.694 "jsonrpc": "2.0", 00:27:37.694 "id": 1, 00:27:37.694 "result": true 00:27:37.694 } 00:27:37.694 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.694 20:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.694 20:03:29 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.694 INFO: Setting log level to 40 00:27:37.694 INFO: Setting log level to 40 00:27:37.694 INFO: Setting log level to 40 00:27:37.694 [2024-07-24 20:03:29.240249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.694 20:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.694 20:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.694 20:03:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.985 Nvme0n1 00:27:40.985 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.986 [2024-07-24 20:03:32.139916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.986 [ 00:27:40.986 { 00:27:40.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:40.986 "subtype": "Discovery", 00:27:40.986 "listen_addresses": [], 00:27:40.986 "allow_any_host": true, 00:27:40.986 "hosts": [] 00:27:40.986 }, 00:27:40.986 { 00:27:40.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.986 "subtype": "NVMe", 00:27:40.986 "listen_addresses": [ 00:27:40.986 { 00:27:40.986 "trtype": "TCP", 00:27:40.986 "adrfam": "IPv4", 00:27:40.986 "traddr": "10.0.0.2", 00:27:40.986 "trsvcid": "4420" 00:27:40.986 } 00:27:40.986 ], 00:27:40.986 "allow_any_host": true, 00:27:40.986 "hosts": [], 00:27:40.986 "serial_number": 
"SPDK00000000000001", 00:27:40.986 "model_number": "SPDK bdev Controller", 00:27:40.986 "max_namespaces": 1, 00:27:40.986 "min_cntlid": 1, 00:27:40.986 "max_cntlid": 65519, 00:27:40.986 "namespaces": [ 00:27:40.986 { 00:27:40.986 "nsid": 1, 00:27:40.986 "bdev_name": "Nvme0n1", 00:27:40.986 "name": "Nvme0n1", 00:27:40.986 "nguid": "D5E3A817B079426FB1D41B9EE16DC1FF", 00:27:40.986 "uuid": "d5e3a817-b079-426f-b1d4-1b9ee16dc1ff" 00:27:40.986 } 00:27:40.986 ] 00:27:40.986 } 00:27:40.986 ] 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:40.986 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:40.986 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:40.986 20:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.986 rmmod nvme_tcp 00:27:40.986 rmmod nvme_fabrics 00:27:40.986 rmmod nvme_keyring 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:40.986 20:03:32 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2215937 ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2215937 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2215937 ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2215937 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2215937 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2215937' 00:27:40.986 killing process with pid 2215937 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2215937 00:27:40.986 20:03:32 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2215937 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.369 20:03:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.370 20:03:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:42.370 20:03:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.907 20:03:36 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.907 00:27:44.907 real 0m21.444s 00:27:44.907 user 0m29.349s 00:27:44.907 sys 0m4.642s 00:27:44.907 20:03:36 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.907 20:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.907 ************************************ 00:27:44.907 END TEST nvmf_identify_passthru 00:27:44.907 ************************************ 00:27:44.907 20:03:36 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:44.907 20:03:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.907 20:03:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.907 20:03:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.907 ************************************ 00:27:44.907 START TEST nvmf_dif 00:27:44.907 ************************************ 00:27:44.907 20:03:36 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:44.907 * Looking for test storage... 
00:27:44.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.907 20:03:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.907 20:03:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.907 20:03:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.907 20:03:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.907 20:03:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.907 20:03:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.907 20:03:36 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:44.907 20:03:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:44.907 20:03:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.907 20:03:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:44.907 20:03:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.907 20:03:36 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.907 20:03:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:50.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:50.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:50.189 Found net devices under 0000:86:00.0: cvl_0_0 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.189 20:03:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:50.190 Found net devices under 0000:86:00.1: cvl_0_1 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.190 20:03:41 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:27:50.190 00:27:50.190 --- 10.0.0.2 ping statistics --- 00:27:50.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.190 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:27:50.190 00:27:50.190 --- 10.0.0.1 ping statistics --- 00:27:50.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.190 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:50.190 20:03:41 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.732 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:52.732 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:52.732 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.732 20:03:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:52.732 20:03:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.732 20:03:44 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.732 20:03:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2221463 00:27:52.732 20:03:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2221463 00:27:52.732 20:03:44 nvmf_dif -- 
common/autotest_common.sh@831 -- # '[' -z 2221463 ']' 00:27:52.733 20:03:44 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.733 20:03:44 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.733 20:03:44 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.733 20:03:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:52.733 20:03:44 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.733 20:03:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.733 [2024-07-24 20:03:44.232353] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:27:52.733 [2024-07-24 20:03:44.232399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.733 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.733 [2024-07-24 20:03:44.290437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.992 [2024-07-24 20:03:44.371819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.992 [2024-07-24 20:03:44.371853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.992 [2024-07-24 20:03:44.371861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.992 [2024-07-24 20:03:44.371867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.992 [2024-07-24 20:03:44.371872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
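[Note: the startup notices continue below; once the reactor is running, dif.sh builds its test subsystem. The RPC sequence traced further down reduces to this sketch, with parameters taken verbatim from the trace (a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported over TCP at 10.0.0.2:4420):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
]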
00:27:52.992 [2024-07-24 20:03:44.371890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:53.562 20:03:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 20:03:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.562 20:03:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:53.562 20:03:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 [2024-07-24 20:03:45.071681] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.562 20:03:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 ************************************ 00:27:53.562 START TEST fio_dif_1_default 00:27:53.562 ************************************ 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 bdev_null0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 [2024-07-24 20:03:45.135955] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.562 { 00:27:53.562 "params": { 00:27:53.562 "name": "Nvme$subsystem", 00:27:53.562 "trtype": "$TEST_TRANSPORT", 00:27:53.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.562 "adrfam": "ipv4", 00:27:53.562 "trsvcid": "$NVMF_PORT", 00:27:53.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.562 "hdgst": ${hdgst:-false}, 00:27:53.562 "ddgst": ${ddgst:-false} 00:27:53.562 }, 00:27:53.562 "method": "bdev_nvme_attach_controller" 00:27:53.562 } 00:27:53.562 EOF 00:27:53.562 )") 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:53.562 20:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.562 "params": { 00:27:53.562 "name": "Nvme0", 00:27:53.562 "trtype": "tcp", 00:27:53.562 "traddr": "10.0.0.2", 00:27:53.562 "adrfam": "ipv4", 00:27:53.562 "trsvcid": "4420", 00:27:53.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.562 "hdgst": false, 00:27:53.562 "ddgst": false 00:27:53.562 }, 00:27:53.562 "method": "bdev_nvme_attach_controller" 00:27:53.562 }' 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:53.852 20:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.111 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.111 fio-3.35 00:27:54.111 Starting 1 thread 00:27:54.111 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.367 00:28:06.367 filename0: (groupid=0, jobs=1): err= 0: pid=2221902: Wed Jul 24 20:03:56 2024 00:28:06.367 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10020msec) 00:28:06.367 slat (nsec): min=2874, max=42222, avg=6212.33, stdev=1412.89 00:28:06.367 clat (usec): min=41798, max=46106, avg=42079.62, stdev=378.43 00:28:06.367 lat (usec): min=41810, max=46116, avg=42085.83, stdev=378.35 00:28:06.367 clat percentiles (usec): 00:28:06.367 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:28:06.367 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:06.367 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:28:06.367 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:28:06.367 | 99.99th=[45876] 00:28:06.367 bw ( KiB/s): min= 352, max= 384, per=99.73%, avg=379.20, stdev=11.72, samples=20 00:28:06.367 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 
00:28:06.367 lat (msec) : 50=100.00% 00:28:06.367 cpu : usr=94.84%, sys=4.90%, ctx=15, majf=0, minf=161 00:28:06.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.367 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:06.367 00:28:06.367 Run status group 0 (all jobs): 00:28:06.367 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10020-10020msec 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.367 00:28:06.367 real 0m11.251s 00:28:06.367 user 0m15.503s 00:28:06.367 sys 0m0.763s 00:28:06.367 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 ************************************ 00:28:06.368 END TEST fio_dif_1_default 00:28:06.368 ************************************ 00:28:06.368 20:03:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:06.368 20:03:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:06.368 20:03:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 ************************************ 00:28:06.368 START TEST fio_dif_1_multi_subsystems 00:28:06.368 ************************************ 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.368 
20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 bdev_null0 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 [2024-07-24 20:03:56.455812] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 bdev_null1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.368 { 00:28:06.368 "params": { 00:28:06.368 "name": "Nvme$subsystem", 00:28:06.368 "trtype": "$TEST_TRANSPORT", 00:28:06.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.368 "adrfam": "ipv4", 00:28:06.368 "trsvcid": "$NVMF_PORT", 00:28:06.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.368 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:06.368 "hdgst": ${hdgst:-false}, 00:28:06.368 "ddgst": ${ddgst:-false} 00:28:06.368 }, 00:28:06.368 "method": "bdev_nvme_attach_controller" 00:28:06.368 } 00:28:06.368 EOF 00:28:06.368 )") 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.368 { 00:28:06.368 "params": { 00:28:06.368 "name": "Nvme$subsystem", 00:28:06.368 "trtype": "$TEST_TRANSPORT", 00:28:06.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.368 "adrfam": "ipv4", 00:28:06.368 "trsvcid": "$NVMF_PORT", 00:28:06.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.368 "hdgst": ${hdgst:-false}, 00:28:06.368 "ddgst": ${ddgst:-false} 00:28:06.368 }, 00:28:06.368 "method": "bdev_nvme_attach_controller" 00:28:06.368 } 00:28:06.368 EOF 00:28:06.368 )") 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:06.368 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:06.368 "params": { 00:28:06.368 "name": "Nvme0", 00:28:06.368 "trtype": "tcp", 00:28:06.368 "traddr": "10.0.0.2", 00:28:06.368 "adrfam": "ipv4", 00:28:06.368 "trsvcid": "4420", 00:28:06.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.368 "hdgst": false, 00:28:06.368 "ddgst": false 00:28:06.368 }, 00:28:06.368 "method": "bdev_nvme_attach_controller" 00:28:06.368 },{ 00:28:06.368 "params": { 00:28:06.368 "name": "Nvme1", 00:28:06.368 "trtype": "tcp", 00:28:06.368 "traddr": "10.0.0.2", 00:28:06.368 "adrfam": "ipv4", 00:28:06.368 "trsvcid": "4420", 00:28:06.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.369 "hdgst": false, 00:28:06.369 "ddgst": false 00:28:06.369 }, 00:28:06.369 "method": "bdev_nvme_attach_controller" 00:28:06.369 }' 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:06.369 20:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.369 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:06.369 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:06.369 fio-3.35 00:28:06.369 Starting 2 threads 00:28:06.369 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.356 00:28:16.356 filename0: (groupid=0, jobs=1): err= 0: pid=2223868: Wed Jul 24 20:04:07 2024 00:28:16.356 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10024msec) 00:28:16.356 slat (nsec): min=6114, max=29813, avg=7825.73, stdev=2712.03 00:28:16.356 clat (usec): min=41847, max=44810, avg=42093.93, stdev=355.48 00:28:16.356 lat (usec): min=41853, max=44835, avg=42101.75, stdev=355.81 00:28:16.356 clat percentiles (usec): 00:28:16.356 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:28:16.356 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:16.356 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:28:16.356 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:28:16.356 | 99.99th=[44827] 
00:28:16.356 bw ( KiB/s): min= 352, max= 384, per=49.88%, avg=379.20, stdev=11.72, samples=20 00:28:16.356 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:28:16.356 lat (msec) : 50=100.00% 00:28:16.356 cpu : usr=97.60%, sys=2.12%, ctx=22, majf=0, minf=111 00:28:16.356 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.356 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.356 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:16.356 filename1: (groupid=0, jobs=1): err= 0: pid=2223869: Wed Jul 24 20:04:07 2024 00:28:16.356 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10019msec) 00:28:16.356 slat (nsec): min=6116, max=42141, avg=8018.91, stdev=3303.40 00:28:16.356 clat (usec): min=41846, max=44779, avg=42071.43, stdev=332.22 00:28:16.356 lat (usec): min=41852, max=44805, avg=42079.45, stdev=332.56 00:28:16.356 clat percentiles (usec): 00:28:16.356 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:28:16.356 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:16.356 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:28:16.356 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:28:16.356 | 99.99th=[44827] 00:28:16.356 bw ( KiB/s): min= 352, max= 384, per=49.88%, avg=379.20, stdev=11.72, samples=20 00:28:16.356 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:28:16.356 lat (msec) : 50=100.00% 00:28:16.356 cpu : usr=98.01%, sys=1.72%, ctx=11, majf=0, minf=180 00:28:16.356 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.356 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.356 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:16.356 00:28:16.356 Run status group 0 (all jobs): 00:28:16.356 READ: bw=760KiB/s (778kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=7616KiB (7799kB), run=10019-10024msec 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.616 20:04:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.616 20:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 20:04:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.616 00:28:16.616 real 0m11.582s 00:28:16.616 user 0m26.770s 00:28:16.616 sys 0m0.745s 00:28:16.616 20:04:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.616 20:04:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 ************************************ 00:28:16.616 END TEST fio_dif_1_multi_subsystems 00:28:16.616 ************************************ 00:28:16.616 20:04:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:16.616 20:04:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:16.616 20:04:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.616 20:04:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:16.616 ************************************ 00:28:16.616 START TEST fio_dif_rand_params 00:28:16.616 ************************************ 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.617 bdev_null0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.617 [2024-07-24 20:04:08.103708] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:16.617 { 00:28:16.617 "params": { 00:28:16.617 "name": "Nvme$subsystem", 00:28:16.617 "trtype": "$TEST_TRANSPORT", 00:28:16.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.617 "adrfam": "ipv4", 00:28:16.617 "trsvcid": "$NVMF_PORT", 00:28:16.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.617 "hdgst": ${hdgst:-false}, 00:28:16.617 "ddgst": ${ddgst:-false} 00:28:16.617 }, 00:28:16.617 "method": "bdev_nvme_attach_controller" 00:28:16.617 } 00:28:16.617 EOF 00:28:16.617 )") 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
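All of the plumbing above funnels into a single command: fio runs with the SPDK bdev plugin preloaded, reading the generated bdev JSON config from fd 62 and the fio job file from fd 61. Stripped of the xtrace noise, and with the Jenkins workspace path shortened, the invocation is:

    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The job file itself is never echoed into this log, so only its effects are visible in the fio banner that follows (randread, 128 KiB blocks, iodepth 3, 3 jobs for this pass).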
00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:16.617 "params": { 00:28:16.617 "name": "Nvme0", 00:28:16.617 "trtype": "tcp", 00:28:16.617 "traddr": "10.0.0.2", 00:28:16.617 "adrfam": "ipv4", 00:28:16.617 "trsvcid": "4420", 00:28:16.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.617 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:16.617 "hdgst": false, 00:28:16.617 "ddgst": false 00:28:16.617 }, 00:28:16.617 "method": "bdev_nvme_attach_controller" 00:28:16.617 }' 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:16.617 20:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.876 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:16.876 ... 
00:28:16.876 fio-3.35 00:28:16.876 Starting 3 threads 00:28:17.136 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.707 00:28:23.707 filename0: (groupid=0, jobs=1): err= 0: pid=2225836: Wed Jul 24 20:04:14 2024 00:28:23.707 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(153MiB/5002msec) 00:28:23.707 slat (nsec): min=6168, max=27027, avg=8483.53, stdev=2559.30 00:28:23.707 clat (usec): min=5274, max=58238, avg=12245.82, stdev=12148.06 00:28:23.707 lat (usec): min=5281, max=58250, avg=12254.30, stdev=12148.17 00:28:23.707 clat percentiles (usec): 00:28:23.707 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6783], 00:28:23.707 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8848], 00:28:23.707 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[15926], 95.00th=[50594], 00:28:23.707 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[58459], 00:28:23.707 | 99.99th=[58459] 00:28:23.707 bw ( KiB/s): min=23808, max=45312, per=41.77%, avg=31257.60, stdev=6372.58, samples=10 00:28:23.707 iops : min= 186, max= 354, avg=244.20, stdev=49.79, samples=10 00:28:23.707 lat (msec) : 10=72.63%, 20=19.04%, 50=2.37%, 100=5.96% 00:28:23.707 cpu : usr=95.54%, sys=3.98%, ctx=9, majf=0, minf=51 00:28:23.707 IO depths : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 issued rwts: total=1224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.707 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.707 filename0: (groupid=0, jobs=1): err= 0: pid=2225837: Wed Jul 24 20:04:14 2024 00:28:23.707 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(130MiB/5019msec) 00:28:23.707 slat (nsec): min=6225, max=25719, avg=8722.61, stdev=2690.98 00:28:23.707 clat (usec): min=5617, max=58205, avg=14491.77, stdev=14530.06 00:28:23.707 lat (usec): min=5624, max=58212, avg=14500.49, stdev=14530.19 00:28:23.707 clat percentiles (usec): 00:28:23.707 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7242], 00:28:23.707 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:28:23.707 | 70.00th=[10290], 80.00th=[12125], 90.00th=[49546], 95.00th=[51643], 00:28:23.707 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57934], 99.95th=[58459], 00:28:23.707 | 99.99th=[58459] 00:28:23.707 bw ( KiB/s): min=16128, max=33280, per=35.41%, avg=26496.00, stdev=5605.11, samples=10 00:28:23.707 iops : min= 126, max= 260, avg=207.00, stdev=43.79, samples=10 00:28:23.707 lat (msec) : 10=67.34%, 20=19.27%, 50=5.01%, 100=8.38% 00:28:23.707 cpu : usr=95.46%, sys=3.89%, ctx=9, majf=0, minf=96 00:28:23.707 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.707 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.707 filename0: (groupid=0, jobs=1): err= 0: pid=2225838: Wed Jul 24 20:04:14 2024 00:28:23.707 read: IOPS=134, BW=16.8MiB/s (17.6MB/s)(84.0MiB/5004msec) 00:28:23.707 slat (nsec): min=6190, max=25897, avg=8756.92, stdev=2819.08 00:28:23.707 clat (usec): min=5709, max=78623, avg=22319.90, stdev=18074.31 00:28:23.707 lat (usec): min=5716, max=78649, avg=22328.66, stdev=18074.52 00:28:23.707 clat percentiles (usec): 
00:28:23.707 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 9110], 00:28:23.707 | 30.00th=[11338], 40.00th=[13960], 50.00th=[15401], 60.00th=[17433], 00:28:23.707 | 70.00th=[21103], 80.00th=[26870], 90.00th=[57934], 95.00th=[60556], 00:28:23.707 | 99.00th=[65274], 99.50th=[65799], 99.90th=[78119], 99.95th=[78119], 00:28:23.707 | 99.99th=[78119] 00:28:23.707 bw ( KiB/s): min=13824, max=19968, per=22.89%, avg=17129.40, stdev=2071.50, samples=10 00:28:23.707 iops : min= 108, max= 156, avg=133.80, stdev=16.21, samples=10 00:28:23.707 lat (msec) : 10=25.30%, 20=42.41%, 50=14.43%, 100=17.86% 00:28:23.707 cpu : usr=95.98%, sys=3.42%, ctx=9, majf=0, minf=118 00:28:23.707 IO depths : 1=9.4%, 2=90.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.707 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.707 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.707 00:28:23.707 Run status group 0 (all jobs): 00:28:23.707 READ: bw=73.1MiB/s (76.6MB/s), 16.8MiB/s-30.6MiB/s (17.6MB/s-32.1MB/s), io=367MiB (385MB), run=5002-5019msec 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
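Before the next parameter set (NULL_DIF=2, 4k blocks, 8 jobs, iodepth 16, three subsystems) gets under way, the figures from the 128k run just above are worth a quick cross-check; fio's per-job and group numbers are self-consistent:

    244.2 iops x 128 KiB  = 31257.6 KiB/s ~ 30.6 MiB/s   (fastest job's reported bandwidth)
    io=367 MiB / ~5.02 s  ~ 73.1 MiB/s                   (matches the READ group line)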
00:28:23.707 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 bdev_null0 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 [2024-07-24 20:04:14.263202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 bdev_null1 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 bdev_null2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.708 { 00:28:23.708 "params": { 00:28:23.708 "name": "Nvme$subsystem", 00:28:23.708 "trtype": "$TEST_TRANSPORT", 00:28:23.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.708 "adrfam": "ipv4", 00:28:23.708 "trsvcid": "$NVMF_PORT", 00:28:23.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.708 "hdgst": ${hdgst:-false}, 00:28:23.708 "ddgst": ${ddgst:-false} 00:28:23.708 }, 00:28:23.708 "method": "bdev_nvme_attach_controller" 00:28:23.708 } 00:28:23.708 EOF 00:28:23.708 )") 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.708 { 00:28:23.708 "params": { 00:28:23.708 "name": "Nvme$subsystem", 00:28:23.708 "trtype": "$TEST_TRANSPORT", 00:28:23.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.708 "adrfam": "ipv4", 00:28:23.708 "trsvcid": "$NVMF_PORT", 00:28:23.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.708 "hdgst": ${hdgst:-false}, 00:28:23.708 "ddgst": ${ddgst:-false} 00:28:23.708 }, 00:28:23.708 "method": "bdev_nvme_attach_controller" 00:28:23.708 } 00:28:23.708 EOF 00:28:23.708 )") 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.708 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.708 { 00:28:23.708 "params": { 00:28:23.708 "name": "Nvme$subsystem", 00:28:23.708 "trtype": "$TEST_TRANSPORT", 00:28:23.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.708 "adrfam": "ipv4", 00:28:23.708 "trsvcid": "$NVMF_PORT", 00:28:23.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.708 "hdgst": ${hdgst:-false}, 00:28:23.708 "ddgst": ${ddgst:-false} 00:28:23.708 }, 00:28:23.709 "method": "bdev_nvme_attach_controller" 00:28:23.709 } 00:28:23.709 EOF 00:28:23.709 )") 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.709 "params": { 00:28:23.709 "name": "Nvme0", 00:28:23.709 "trtype": "tcp", 00:28:23.709 "traddr": "10.0.0.2", 00:28:23.709 "adrfam": "ipv4", 00:28:23.709 "trsvcid": "4420", 00:28:23.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.709 "hdgst": false, 00:28:23.709 "ddgst": false 00:28:23.709 }, 00:28:23.709 "method": "bdev_nvme_attach_controller" 00:28:23.709 },{ 00:28:23.709 "params": { 00:28:23.709 "name": "Nvme1", 00:28:23.709 "trtype": "tcp", 00:28:23.709 "traddr": "10.0.0.2", 00:28:23.709 "adrfam": "ipv4", 00:28:23.709 "trsvcid": "4420", 00:28:23.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.709 "hdgst": false, 00:28:23.709 "ddgst": false 00:28:23.709 }, 00:28:23.709 "method": "bdev_nvme_attach_controller" 00:28:23.709 },{ 00:28:23.709 "params": { 00:28:23.709 "name": "Nvme2", 00:28:23.709 "trtype": "tcp", 00:28:23.709 "traddr": "10.0.0.2", 00:28:23.709 "adrfam": "ipv4", 00:28:23.709 "trsvcid": "4420", 00:28:23.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:23.709 "hdgst": false, 00:28:23.709 "ddgst": false 00:28:23.709 }, 00:28:23.709 "method": "bdev_nvme_attach_controller" 00:28:23.709 }' 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:23.709 20:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.709 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.709 ... 00:28:23.709 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.709 ... 00:28:23.709 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.709 ... 00:28:23.709 fio-3.35 00:28:23.709 Starting 24 threads 00:28:23.709 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.906 00:28:35.906 filename0: (groupid=0, jobs=1): err= 0: pid=2227015: Wed Jul 24 20:04:25 2024 00:28:35.906 read: IOPS=607, BW=2428KiB/s (2486kB/s)(23.8MiB/10023msec) 00:28:35.906 slat (nsec): min=3216, max=69095, avg=13639.98, stdev=7305.18 00:28:35.906 clat (usec): min=11055, max=51815, avg=26283.29, stdev=3077.27 00:28:35.906 lat (usec): min=11064, max=51824, avg=26296.93, stdev=3077.75 00:28:35.906 clat percentiles (usec): 00:28:35.906 | 1.00th=[15795], 5.00th=[23987], 10.00th=[24773], 20.00th=[25297], 00:28:35.906 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:28:35.906 | 70.00th=[26608], 80.00th=[27132], 90.00th=[27657], 95.00th=[30802], 00:28:35.906 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44303], 99.95th=[51643], 00:28:35.906 | 99.99th=[51643] 00:28:35.906 bw ( KiB/s): min= 2336, max= 2512, per=4.48%, avg=2427.20, stdev=51.68, samples=20 00:28:35.906 iops : min= 584, max= 628, avg=606.80, stdev=12.92, samples=20 00:28:35.906 lat (msec) : 20=2.81%, 50=97.09%, 100=0.10% 00:28:35.906 cpu : usr=98.71%, sys=0.87%, ctx=15, majf=0, minf=53 00:28:35.906 IO depths : 1=0.3%, 2=0.6%, 4=6.0%, 8=79.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:28:35.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 complete : 0=0.0%, 4=89.5%, 8=6.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 issued rwts: total=6084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.906 filename0: (groupid=0, jobs=1): err= 0: pid=2227016: Wed Jul 24 20:04:25 2024 00:28:35.906 read: IOPS=580, BW=2320KiB/s (2376kB/s)(22.7MiB/10007msec) 00:28:35.906 slat (nsec): min=4242, max=66050, avg=15902.67, stdev=8682.79 00:28:35.906 clat (usec): min=9500, max=50081, avg=27492.46, stdev=5084.20 00:28:35.906 lat (usec): min=9504, max=50093, avg=27508.36, stdev=5084.83 00:28:35.906 clat percentiles (usec): 00:28:35.906 | 1.00th=[14353], 5.00th=[19006], 10.00th=[23987], 20.00th=[25297], 00:28:35.906 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:28:35.906 | 70.00th=[27395], 80.00th=[31851], 90.00th=[34866], 95.00th=[36963], 00:28:35.906 | 99.00th=[43254], 99.50th=[45351], 99.90th=[49546], 99.95th=[50070], 00:28:35.906 | 99.99th=[50070] 00:28:35.906 bw ( KiB/s): min= 2176, max= 2560, per=4.27%, avg=2315.60, stdev=101.77, samples=20 00:28:35.906 iops : min= 544, max= 640, avg=578.90, stdev=25.44, samples=20 00:28:35.906 lat (msec) : 10=0.19%, 20=6.15%, 
50=93.63%, 100=0.03% 00:28:35.906 cpu : usr=98.46%, sys=1.10%, ctx=17, majf=0, minf=31 00:28:35.906 IO depths : 1=0.8%, 2=1.7%, 4=8.4%, 8=76.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:28:35.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 issued rwts: total=5805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.906 filename0: (groupid=0, jobs=1): err= 0: pid=2227017: Wed Jul 24 20:04:25 2024 00:28:35.906 read: IOPS=536, BW=2146KiB/s (2197kB/s)(21.0MiB/10004msec) 00:28:35.906 slat (nsec): min=6842, max=99140, avg=27143.28, stdev=18849.56 00:28:35.906 clat (usec): min=3433, max=50246, avg=29649.77, stdev=5799.74 00:28:35.906 lat (usec): min=3441, max=50260, avg=29676.92, stdev=5802.68 00:28:35.906 clat percentiles (usec): 00:28:35.906 | 1.00th=[15139], 5.00th=[20055], 10.00th=[24511], 20.00th=[25560], 00:28:35.906 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27657], 60.00th=[32375], 00:28:35.906 | 70.00th=[33817], 80.00th=[34866], 90.00th=[36439], 95.00th=[38011], 00:28:35.906 | 99.00th=[42730], 99.50th=[45351], 99.90th=[50070], 99.95th=[50070], 00:28:35.906 | 99.99th=[50070] 00:28:35.906 bw ( KiB/s): min= 1792, max= 2432, per=3.96%, avg=2146.53, stdev=168.30, samples=19 00:28:35.906 iops : min= 448, max= 608, avg=536.63, stdev=42.07, samples=19 00:28:35.906 lat (msec) : 4=0.11%, 10=0.19%, 20=4.66%, 50=95.02%, 100=0.02% 00:28:35.906 cpu : usr=98.48%, sys=1.08%, ctx=15, majf=0, minf=39 00:28:35.906 IO depths : 1=2.1%, 2=4.2%, 4=13.0%, 8=69.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:35.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 complete : 0=0.0%, 4=91.2%, 8=4.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.906 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.906 filename0: (groupid=0, jobs=1): err= 0: pid=2227018: Wed Jul 24 20:04:25 2024 00:28:35.906 read: IOPS=619, BW=2478KiB/s (2537kB/s)(24.2MiB/10009msec) 00:28:35.906 slat (nsec): min=6730, max=79875, avg=12738.42, stdev=5836.30 00:28:35.906 clat (usec): min=12992, max=46168, avg=25735.55, stdev=3517.46 00:28:35.906 lat (usec): min=13001, max=46183, avg=25748.29, stdev=3518.31 00:28:35.906 clat percentiles (usec): 00:28:35.906 | 1.00th=[15270], 5.00th=[17957], 10.00th=[23725], 20.00th=[25035], 00:28:35.906 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:28:35.906 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[31065], 00:28:35.906 | 99.00th=[37487], 99.50th=[39060], 99.90th=[45351], 99.95th=[45876], 00:28:35.906 | 99.99th=[46400] 00:28:35.907 bw ( KiB/s): min= 2176, max= 2784, per=4.56%, avg=2473.60, stdev=120.79, samples=20 00:28:35.907 iops : min= 544, max= 696, avg=618.40, stdev=30.20, samples=20 00:28:35.907 lat (msec) : 20=7.76%, 50=92.24% 00:28:35.907 cpu : usr=98.62%, sys=0.95%, ctx=19, majf=0, minf=53 00:28:35.907 IO depths : 1=4.2%, 2=8.6%, 4=19.1%, 8=59.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=6200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename0: (groupid=0, jobs=1): err= 0: pid=2227020: Wed Jul 24 
20:04:25 2024 00:28:35.907 read: IOPS=587, BW=2349KiB/s (2405kB/s)(23.0MiB/10014msec) 00:28:35.907 slat (nsec): min=6869, max=79056, avg=17857.65, stdev=10036.91 00:28:35.907 clat (usec): min=10230, max=48832, avg=27147.77, stdev=4426.22 00:28:35.907 lat (usec): min=10245, max=48841, avg=27165.63, stdev=4426.18 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[15795], 5.00th=[20579], 10.00th=[24249], 20.00th=[25297], 00:28:35.907 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:28:35.907 | 70.00th=[27132], 80.00th=[27919], 90.00th=[33817], 95.00th=[35914], 00:28:35.907 | 99.00th=[41157], 99.50th=[44827], 99.90th=[46924], 99.95th=[49021], 00:28:35.907 | 99.99th=[49021] 00:28:35.907 bw ( KiB/s): min= 2176, max= 2512, per=4.33%, avg=2345.60, stdev=77.11, samples=20 00:28:35.907 iops : min= 544, max= 628, avg=586.40, stdev=19.28, samples=20 00:28:35.907 lat (msec) : 20=4.76%, 50=95.24% 00:28:35.907 cpu : usr=98.52%, sys=1.05%, ctx=18, majf=0, minf=51 00:28:35.907 IO depths : 1=0.9%, 2=1.9%, 4=8.7%, 8=76.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename0: (groupid=0, jobs=1): err= 0: pid=2227021: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=554, BW=2218KiB/s (2271kB/s)(21.7MiB/10020msec) 00:28:35.907 slat (nsec): min=6849, max=81382, avg=18775.30, stdev=11256.19 00:28:35.907 clat (usec): min=10920, max=52112, avg=28736.00, stdev=5312.07 00:28:35.907 lat (usec): min=10938, max=52129, avg=28754.77, stdev=5311.51 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[16319], 5.00th=[22938], 10.00th=[24773], 20.00th=[25560], 00:28:35.907 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26870], 60.00th=[27395], 00:28:35.907 | 70.00th=[31327], 80.00th=[33817], 90.00th=[35914], 95.00th=[37487], 00:28:35.907 | 99.00th=[46400], 99.50th=[48497], 99.90th=[51643], 99.95th=[52167], 00:28:35.907 | 99.99th=[52167] 00:28:35.907 bw ( KiB/s): min= 2008, max= 2328, per=4.09%, avg=2218.80, stdev=77.10, samples=20 00:28:35.907 iops : min= 502, max= 582, avg=554.70, stdev=19.27, samples=20 00:28:35.907 lat (msec) : 20=3.47%, 50=96.15%, 100=0.38% 00:28:35.907 cpu : usr=98.57%, sys=0.99%, ctx=20, majf=0, minf=56 00:28:35.907 IO depths : 1=0.2%, 2=0.6%, 4=7.6%, 8=77.8%, 16=13.9%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename0: (groupid=0, jobs=1): err= 0: pid=2227022: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=577, BW=2312KiB/s (2367kB/s)(22.6MiB/10020msec) 00:28:35.907 slat (nsec): min=6761, max=74405, avg=18119.75, stdev=11360.09 00:28:35.907 clat (usec): min=13136, max=46016, avg=27570.23, stdev=4751.59 00:28:35.907 lat (usec): min=13144, max=46026, avg=27588.35, stdev=4751.70 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[16319], 5.00th=[20317], 10.00th=[24249], 20.00th=[25297], 00:28:35.907 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870], 00:28:35.907 | 70.00th=[27395], 80.00th=[31851], 90.00th=[34866], 
95.00th=[36439], 00:28:35.907 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45351], 99.95th=[45876], 00:28:35.907 | 99.99th=[45876] 00:28:35.907 bw ( KiB/s): min= 2224, max= 2408, per=4.26%, avg=2310.00, stdev=54.06, samples=20 00:28:35.907 iops : min= 556, max= 602, avg=577.50, stdev=13.52, samples=20 00:28:35.907 lat (msec) : 20=4.28%, 50=95.72% 00:28:35.907 cpu : usr=98.48%, sys=1.09%, ctx=17, majf=0, minf=46 00:28:35.907 IO depths : 1=0.5%, 2=1.0%, 4=8.2%, 8=77.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename0: (groupid=0, jobs=1): err= 0: pid=2227023: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=539, BW=2159KiB/s (2211kB/s)(21.1MiB/10010msec) 00:28:35.907 slat (nsec): min=6822, max=76486, avg=17306.08, stdev=10299.11 00:28:35.907 clat (usec): min=12143, max=51523, avg=29531.40, stdev=5820.49 00:28:35.907 lat (usec): min=12151, max=51532, avg=29548.71, stdev=5819.45 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[15795], 5.00th=[22938], 10.00th=[24773], 20.00th=[25560], 00:28:35.907 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27132], 60.00th=[28705], 00:28:35.907 | 70.00th=[33162], 80.00th=[34866], 90.00th=[36963], 95.00th=[39060], 00:28:35.907 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:28:35.907 | 99.99th=[51643] 00:28:35.907 bw ( KiB/s): min= 2000, max= 2336, per=3.98%, avg=2157.47, stdev=97.40, samples=19 00:28:35.907 iops : min= 500, max= 584, avg=539.37, stdev=24.35, samples=19 00:28:35.907 lat (msec) : 20=4.03%, 50=95.48%, 100=0.48% 00:28:35.907 cpu : usr=98.45%, sys=1.13%, ctx=17, majf=0, minf=42 00:28:35.907 IO depths : 1=0.6%, 2=1.4%, 4=8.7%, 8=76.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename1: (groupid=0, jobs=1): err= 0: pid=2227024: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=549, BW=2197KiB/s (2250kB/s)(21.5MiB/10010msec) 00:28:35.907 slat (nsec): min=6828, max=80461, avg=17318.01, stdev=10042.32 00:28:35.907 clat (usec): min=12106, max=52657, avg=29030.34, stdev=5474.76 00:28:35.907 lat (usec): min=12128, max=52675, avg=29047.65, stdev=5473.91 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[16712], 5.00th=[23725], 10.00th=[25035], 20.00th=[25560], 00:28:35.907 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26870], 60.00th=[27657], 00:28:35.907 | 70.00th=[31589], 80.00th=[34341], 90.00th=[36439], 95.00th=[38536], 00:28:35.907 | 99.00th=[45351], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:28:35.907 | 99.99th=[52691] 00:28:35.907 bw ( KiB/s): min= 2072, max= 2304, per=4.03%, avg=2184.42, stdev=74.31, samples=19 00:28:35.907 iops : min= 518, max= 576, avg=546.11, stdev=18.58, samples=19 00:28:35.907 lat (msec) : 20=3.18%, 50=96.27%, 100=0.55% 00:28:35.907 cpu : usr=98.45%, sys=1.12%, ctx=21, majf=0, minf=49 00:28:35.907 IO depths : 1=0.3%, 2=0.7%, 4=7.5%, 8=78.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=89.7%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename1: (groupid=0, jobs=1): err= 0: pid=2227025: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=544, BW=2179KiB/s (2232kB/s)(21.3MiB/10003msec) 00:28:35.907 slat (nsec): min=6788, max=86379, avg=16503.70, stdev=10504.67 00:28:35.907 clat (usec): min=5546, max=52222, avg=29271.31, stdev=5806.50 00:28:35.907 lat (usec): min=5561, max=52230, avg=29287.81, stdev=5805.36 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[15139], 5.00th=[20317], 10.00th=[24773], 20.00th=[25560], 00:28:35.907 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27132], 60.00th=[28705], 00:28:35.907 | 70.00th=[32900], 80.00th=[34341], 90.00th=[36439], 95.00th=[38536], 00:28:35.907 | 99.00th=[45351], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:28:35.907 | 99.99th=[52167] 00:28:35.907 bw ( KiB/s): min= 1968, max= 2352, per=4.01%, avg=2175.16, stdev=102.52, samples=19 00:28:35.907 iops : min= 492, max= 588, avg=543.79, stdev=25.63, samples=19 00:28:35.907 lat (msec) : 10=0.29%, 20=4.39%, 50=95.10%, 100=0.22% 00:28:35.907 cpu : usr=98.39%, sys=1.19%, ctx=21, majf=0, minf=43 00:28:35.907 IO depths : 1=0.9%, 2=1.9%, 4=9.9%, 8=75.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:35.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.907 issued rwts: total=5450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.907 filename1: (groupid=0, jobs=1): err= 0: pid=2227026: Wed Jul 24 20:04:25 2024 00:28:35.907 read: IOPS=575, BW=2302KiB/s (2358kB/s)(22.5MiB/10005msec) 00:28:35.907 slat (nsec): min=5827, max=81565, avg=19102.17, stdev=10956.03 00:28:35.907 clat (usec): min=11750, max=49521, avg=27656.42, stdev=4761.14 00:28:35.907 lat (usec): min=11774, max=49544, avg=27675.52, stdev=4760.93 00:28:35.907 clat percentiles (usec): 00:28:35.907 | 1.00th=[16319], 5.00th=[20317], 10.00th=[24511], 20.00th=[25297], 00:28:35.907 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:28:35.907 | 70.00th=[27395], 80.00th=[32113], 90.00th=[34866], 95.00th=[36439], 00:28:35.907 | 99.00th=[42730], 99.50th=[44827], 99.90th=[47973], 99.95th=[49546], 00:28:35.907 | 99.99th=[49546] 00:28:35.907 bw ( KiB/s): min= 2000, max= 2432, per=4.21%, avg=2283.37, stdev=122.30, samples=19 00:28:35.907 iops : min= 500, max= 608, avg=570.84, stdev=30.57, samples=19 00:28:35.907 lat (msec) : 20=4.62%, 50=95.38% 00:28:35.907 cpu : usr=98.54%, sys=1.04%, ctx=18, majf=0, minf=46 00:28:35.908 IO depths : 1=2.3%, 2=6.0%, 4=17.3%, 8=63.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename1: (groupid=0, jobs=1): err= 0: pid=2227027: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=589, BW=2359KiB/s (2416kB/s)(23.0MiB/10003msec) 00:28:35.908 slat (nsec): min=6840, max=74180, avg=15436.68, stdev=10157.26 00:28:35.908 clat (usec): min=5353, max=60227, avg=27052.83, stdev=4002.92 00:28:35.908 lat (usec): 
min=5368, max=60245, avg=27068.27, stdev=4002.33 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[17433], 5.00th=[24249], 10.00th=[24773], 20.00th=[25297], 00:28:35.908 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:28:35.908 | 70.00th=[26870], 80.00th=[27395], 90.00th=[32113], 95.00th=[34866], 00:28:35.908 | 99.00th=[40109], 99.50th=[44303], 99.90th=[60031], 99.95th=[60031], 00:28:35.908 | 99.99th=[60031] 00:28:35.908 bw ( KiB/s): min= 2048, max= 2512, per=4.34%, avg=2350.74, stdev=119.59, samples=19 00:28:35.908 iops : min= 512, max= 628, avg=587.68, stdev=29.90, samples=19 00:28:35.908 lat (msec) : 10=0.12%, 20=1.73%, 50=97.85%, 100=0.31% 00:28:35.908 cpu : usr=98.62%, sys=0.96%, ctx=13, majf=0, minf=32 00:28:35.908 IO depths : 1=0.1%, 2=0.3%, 4=4.6%, 8=80.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=89.5%, 8=7.3%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename1: (groupid=0, jobs=1): err= 0: pid=2227028: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=586, BW=2346KiB/s (2403kB/s)(23.0MiB/10027msec) 00:28:35.908 slat (nsec): min=6850, max=72000, avg=18309.50, stdev=10962.97 00:28:35.908 clat (usec): min=13981, max=47867, avg=27153.42, stdev=4182.52 00:28:35.908 lat (usec): min=13996, max=47890, avg=27171.73, stdev=4182.82 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[16319], 5.00th=[20841], 10.00th=[24511], 20.00th=[25297], 00:28:35.908 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:28:35.908 | 70.00th=[27132], 80.00th=[28443], 90.00th=[33817], 95.00th=[35390], 00:28:35.908 | 99.00th=[38536], 99.50th=[41157], 99.90th=[45876], 99.95th=[47973], 00:28:35.908 | 99.99th=[47973] 00:28:35.908 bw ( KiB/s): min= 2176, max= 2512, per=4.33%, avg=2346.40, stdev=96.46, samples=20 00:28:35.908 iops : min= 544, max= 628, avg=586.60, stdev=24.11, samples=20 00:28:35.908 lat (msec) : 20=4.20%, 50=95.80% 00:28:35.908 cpu : usr=98.50%, sys=1.07%, ctx=16, majf=0, minf=39 00:28:35.908 IO depths : 1=1.1%, 2=2.1%, 4=9.3%, 8=75.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename1: (groupid=0, jobs=1): err= 0: pid=2227029: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=558, BW=2235KiB/s (2289kB/s)(21.9MiB/10020msec) 00:28:35.908 slat (nsec): min=6788, max=81443, avg=17208.89, stdev=9793.64 00:28:35.908 clat (usec): min=12833, max=50915, avg=28521.66, stdev=5113.40 00:28:35.908 lat (usec): min=12853, max=50937, avg=28538.87, stdev=5112.92 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[16450], 5.00th=[23200], 10.00th=[24773], 20.00th=[25560], 00:28:35.908 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:28:35.908 | 70.00th=[30540], 80.00th=[33817], 90.00th=[35914], 95.00th=[37487], 00:28:35.908 | 99.00th=[42730], 99.50th=[44303], 99.90th=[50594], 99.95th=[51119], 00:28:35.908 | 99.99th=[51119] 00:28:35.908 bw ( KiB/s): min= 2048, max= 2336, per=4.12%, avg=2233.20, stdev=81.35, samples=20 00:28:35.908 iops : min= 512, max= 
584, avg=558.30, stdev=20.34, samples=20 00:28:35.908 lat (msec) : 20=3.73%, 50=96.11%, 100=0.16% 00:28:35.908 cpu : usr=98.52%, sys=1.06%, ctx=19, majf=0, minf=41 00:28:35.908 IO depths : 1=0.5%, 2=1.0%, 4=7.8%, 8=77.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename1: (groupid=0, jobs=1): err= 0: pid=2227030: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=551, BW=2205KiB/s (2258kB/s)(21.6MiB/10014msec) 00:28:35.908 slat (nsec): min=6867, max=68391, avg=17836.61, stdev=9926.31 00:28:35.908 clat (usec): min=12880, max=53310, avg=28898.88, stdev=5377.33 00:28:35.908 lat (usec): min=12895, max=53317, avg=28916.71, stdev=5377.07 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[16057], 5.00th=[22152], 10.00th=[24773], 20.00th=[25560], 00:28:35.908 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26870], 60.00th=[27657], 00:28:35.908 | 70.00th=[32375], 80.00th=[34341], 90.00th=[35914], 95.00th=[38011], 00:28:35.908 | 99.00th=[43779], 99.50th=[45876], 99.90th=[50070], 99.95th=[53216], 00:28:35.908 | 99.99th=[53216] 00:28:35.908 bw ( KiB/s): min= 2016, max= 2384, per=4.07%, avg=2206.00, stdev=108.05, samples=20 00:28:35.908 iops : min= 504, max= 596, avg=551.50, stdev=27.01, samples=20 00:28:35.908 lat (msec) : 20=3.97%, 50=95.87%, 100=0.16% 00:28:35.908 cpu : usr=98.39%, sys=1.18%, ctx=14, majf=0, minf=37 00:28:35.908 IO depths : 1=1.0%, 2=2.1%, 4=10.0%, 8=74.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename1: (groupid=0, jobs=1): err= 0: pid=2227031: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=536, BW=2144KiB/s (2196kB/s)(21.0MiB/10009msec) 00:28:35.908 slat (nsec): min=6806, max=81241, avg=18213.85, stdev=10597.92 00:28:35.908 clat (usec): min=11274, max=53430, avg=29732.03, stdev=5807.79 00:28:35.908 lat (usec): min=11289, max=53442, avg=29750.24, stdev=5807.23 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[16712], 5.00th=[23987], 10.00th=[25035], 20.00th=[25560], 00:28:35.908 | 30.00th=[26346], 40.00th=[26608], 50.00th=[27132], 60.00th=[30540], 00:28:35.908 | 70.00th=[33162], 80.00th=[34866], 90.00th=[36439], 95.00th=[39060], 00:28:35.908 | 99.00th=[49546], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:28:35.908 | 99.99th=[53216] 00:28:35.908 bw ( KiB/s): min= 1920, max= 2416, per=3.95%, avg=2139.79, stdev=119.89, samples=19 00:28:35.908 iops : min= 480, max= 604, avg=534.95, stdev=29.97, samples=19 00:28:35.908 lat (msec) : 20=3.13%, 50=95.99%, 100=0.88% 00:28:35.908 cpu : usr=98.26%, sys=1.32%, ctx=21, majf=0, minf=48 00:28:35.908 IO depths : 1=0.6%, 2=1.4%, 4=8.6%, 8=76.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 
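Each per-thread block in this listing has the same shape: a read: line with IOPS and bandwidth, slat/clat latency percentiles in microseconds, the per-thread bandwidth statistics, the lat (msec) distribution, and the IO-depth histograms. The totals are internally consistent and easy to cross-check; taking the filename1 thread just above (pid=2227031, BW=2144KiB/s over 10009 msec), a one-liner in the same awk style the harness already uses reproduces the reported io= figure (the numbers are copied from that block; KiB here means 1024 bytes):

  awk 'BEGIN {
    bw_kib = 2144                                # BW from "read: IOPS=536, BW=2144KiB/s"
    run_s  = 10.009                              # runtime from "(21.0MiB/10009msec)"
    printf "%.1f MiB\n", bw_kib * run_s / 1024   # prints ~21.0 MiB, matching io=21.0MiB
  }'

The same check applies to every block, and the per=x.xx% column is that thread's share of the aggregate bandwidth reported in the run status group at the end of the listing.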
00:28:35.908 filename2: (groupid=0, jobs=1): err= 0: pid=2227033: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=544, BW=2177KiB/s (2229kB/s)(21.3MiB/10011msec) 00:28:35.908 slat (nsec): min=6802, max=81277, avg=17605.14, stdev=10539.60 00:28:35.908 clat (usec): min=11241, max=57947, avg=29290.31, stdev=5912.94 00:28:35.908 lat (usec): min=11253, max=57966, avg=29307.91, stdev=5912.64 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[15139], 5.00th=[22414], 10.00th=[24511], 20.00th=[25560], 00:28:35.908 | 30.00th=[26084], 40.00th=[26346], 50.00th=[27132], 60.00th=[28181], 00:28:35.908 | 70.00th=[32900], 80.00th=[34866], 90.00th=[36963], 95.00th=[38536], 00:28:35.908 | 99.00th=[46924], 99.50th=[51119], 99.90th=[52167], 99.95th=[57934], 00:28:35.908 | 99.99th=[57934] 00:28:35.908 bw ( KiB/s): min= 1976, max= 2352, per=4.00%, avg=2171.37, stdev=112.26, samples=19 00:28:35.908 iops : min= 494, max= 588, avg=542.84, stdev=28.07, samples=19 00:28:35.908 lat (msec) : 20=4.28%, 50=95.08%, 100=0.64% 00:28:35.908 cpu : usr=98.56%, sys=1.02%, ctx=20, majf=0, minf=33 00:28:35.908 IO depths : 1=0.3%, 2=0.8%, 4=7.7%, 8=77.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename2: (groupid=0, jobs=1): err= 0: pid=2227034: Wed Jul 24 20:04:25 2024 00:28:35.908 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10008msec) 00:28:35.908 slat (nsec): min=6821, max=80639, avg=17729.64, stdev=10544.75 00:28:35.908 clat (usec): min=12299, max=51777, avg=28563.98, stdev=5398.96 00:28:35.908 lat (usec): min=12313, max=51793, avg=28581.71, stdev=5399.09 00:28:35.908 clat percentiles (usec): 00:28:35.908 | 1.00th=[15533], 5.00th=[20055], 10.00th=[24249], 20.00th=[25297], 00:28:35.908 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27395], 00:28:35.908 | 70.00th=[31589], 80.00th=[34341], 90.00th=[35914], 95.00th=[37487], 00:28:35.908 | 99.00th=[42730], 99.50th=[46400], 99.90th=[51119], 99.95th=[51643], 00:28:35.908 | 99.99th=[51643] 00:28:35.908 bw ( KiB/s): min= 2048, max= 2352, per=4.10%, avg=2225.89, stdev=91.39, samples=19 00:28:35.908 iops : min= 512, max= 588, avg=556.47, stdev=22.85, samples=19 00:28:35.908 lat (msec) : 20=4.85%, 50=95.01%, 100=0.14% 00:28:35.908 cpu : usr=98.67%, sys=0.92%, ctx=14, majf=0, minf=39 00:28:35.908 IO depths : 1=0.9%, 2=1.8%, 4=9.4%, 8=75.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:28:35.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.908 issued rwts: total=5586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.908 filename2: (groupid=0, jobs=1): err= 0: pid=2227035: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=578, BW=2314KiB/s (2370kB/s)(22.6MiB/10012msec) 00:28:35.909 slat (nsec): min=6590, max=84087, avg=24578.72, stdev=11552.71 00:28:35.909 clat (usec): min=11621, max=51352, avg=27510.51, stdev=4539.93 00:28:35.909 lat (usec): min=11630, max=51395, avg=27535.09, stdev=4540.49 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[16712], 5.00th=[21103], 10.00th=[24511], 20.00th=[25297], 00:28:35.909 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26346], 
60.00th=[26870], 00:28:35.909 | 70.00th=[27395], 80.00th=[31065], 90.00th=[34341], 95.00th=[35914], 00:28:35.909 | 99.00th=[41157], 99.50th=[43779], 99.90th=[50070], 99.95th=[51119], 00:28:35.909 | 99.99th=[51119] 00:28:35.909 bw ( KiB/s): min= 2048, max= 2496, per=4.27%, avg=2313.20, stdev=90.97, samples=20 00:28:35.909 iops : min= 512, max= 624, avg=578.30, stdev=22.74, samples=20 00:28:35.909 lat (msec) : 20=4.00%, 50=95.84%, 100=0.16% 00:28:35.909 cpu : usr=98.09%, sys=1.14%, ctx=354, majf=0, minf=37 00:28:35.909 IO depths : 1=0.6%, 2=1.2%, 4=8.2%, 8=77.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=5793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 filename2: (groupid=0, jobs=1): err= 0: pid=2227036: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10020msec) 00:28:35.909 slat (usec): min=6, max=214, avg=17.31, stdev=10.83 00:28:35.909 clat (usec): min=12668, max=53403, avg=29109.03, stdev=5406.10 00:28:35.909 lat (usec): min=12677, max=53412, avg=29126.33, stdev=5404.81 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[16909], 5.00th=[23725], 10.00th=[24773], 20.00th=[25560], 00:28:35.909 | 30.00th=[26084], 40.00th=[26608], 50.00th=[26870], 60.00th=[27919], 00:28:35.909 | 70.00th=[31851], 80.00th=[34341], 90.00th=[36439], 95.00th=[38011], 00:28:35.909 | 99.00th=[44827], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:28:35.909 | 99.99th=[53216] 00:28:35.909 bw ( KiB/s): min= 1888, max= 2416, per=4.04%, avg=2190.40, stdev=129.56, samples=20 00:28:35.909 iops : min= 472, max= 604, avg=547.60, stdev=32.39, samples=20 00:28:35.909 lat (msec) : 20=3.39%, 50=96.19%, 100=0.42% 00:28:35.909 cpu : usr=98.46%, sys=1.05%, ctx=14, majf=0, minf=43 00:28:35.909 IO depths : 1=0.3%, 2=1.0%, 4=7.8%, 8=76.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.8%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 filename2: (groupid=0, jobs=1): err= 0: pid=2227037: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10012msec) 00:28:35.909 slat (nsec): min=6846, max=83733, avg=18113.57, stdev=10466.05 00:28:35.909 clat (usec): min=10375, max=57137, avg=28562.80, stdev=5669.23 00:28:35.909 lat (usec): min=10388, max=57153, avg=28580.91, stdev=5668.42 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[15533], 5.00th=[19006], 10.00th=[24249], 20.00th=[25297], 00:28:35.909 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27395], 00:28:35.909 | 70.00th=[31589], 80.00th=[33817], 90.00th=[35914], 95.00th=[38011], 00:28:35.909 | 99.00th=[44827], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:28:35.909 | 99.99th=[56886] 00:28:35.909 bw ( KiB/s): min= 2000, max= 2384, per=4.12%, avg=2231.60, stdev=101.61, samples=20 00:28:35.909 iops : min= 500, max= 596, avg=557.90, stdev=25.40, samples=20 00:28:35.909 lat (msec) : 20=6.27%, 50=93.63%, 100=0.11% 00:28:35.909 cpu : usr=98.27%, sys=1.29%, ctx=17, majf=0, minf=34 00:28:35.909 IO depths : 1=0.8%, 2=1.7%, 4=9.4%, 8=75.8%, 16=12.4%, 32=0.0%, 
>=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=5585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 filename2: (groupid=0, jobs=1): err= 0: pid=2227038: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=543, BW=2175KiB/s (2227kB/s)(21.2MiB/10003msec) 00:28:35.909 slat (nsec): min=6798, max=75966, avg=11412.47, stdev=6323.39 00:28:35.909 clat (usec): min=6344, max=65021, avg=29369.47, stdev=5727.12 00:28:35.909 lat (usec): min=6352, max=65039, avg=29380.88, stdev=5726.73 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[16450], 5.00th=[22414], 10.00th=[24511], 20.00th=[25560], 00:28:35.909 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27132], 60.00th=[29230], 00:28:35.909 | 70.00th=[32375], 80.00th=[34341], 90.00th=[36963], 95.00th=[39060], 00:28:35.909 | 99.00th=[46400], 99.50th=[49546], 99.90th=[52167], 99.95th=[64750], 00:28:35.909 | 99.99th=[65274] 00:28:35.909 bw ( KiB/s): min= 1728, max= 2368, per=3.98%, avg=2156.63, stdev=158.71, samples=19 00:28:35.909 iops : min= 432, max= 592, avg=539.16, stdev=39.68, samples=19 00:28:35.909 lat (msec) : 10=0.17%, 20=2.24%, 50=97.19%, 100=0.40% 00:28:35.909 cpu : usr=98.53%, sys=1.03%, ctx=15, majf=0, minf=53 00:28:35.909 IO depths : 1=0.2%, 2=0.5%, 4=6.9%, 8=77.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.3%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=5439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 filename2: (groupid=0, jobs=1): err= 0: pid=2227039: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=607, BW=2428KiB/s (2487kB/s)(23.8MiB/10022msec) 00:28:35.909 slat (nsec): min=4254, max=72057, avg=15528.20, stdev=9050.44 00:28:35.909 clat (usec): min=11763, max=43794, avg=26256.91, stdev=3978.00 00:28:35.909 lat (usec): min=11771, max=43808, avg=26272.43, stdev=3979.30 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[13698], 5.00th=[18220], 10.00th=[23200], 20.00th=[25035], 00:28:35.909 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:28:35.909 | 70.00th=[26608], 80.00th=[27395], 90.00th=[31589], 95.00th=[34341], 00:28:35.909 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:28:35.909 | 99.99th=[43779] 00:28:35.909 bw ( KiB/s): min= 2304, max= 2664, per=4.48%, avg=2427.45, stdev=100.02, samples=20 00:28:35.909 iops : min= 576, max= 666, avg=606.85, stdev=24.98, samples=20 00:28:35.909 lat (msec) : 20=6.67%, 50=93.33% 00:28:35.909 cpu : usr=98.40%, sys=1.16%, ctx=27, majf=0, minf=33 00:28:35.909 IO depths : 1=1.2%, 2=2.6%, 4=8.9%, 8=74.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=6084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 filename2: (groupid=0, jobs=1): err= 0: pid=2227040: Wed Jul 24 20:04:25 2024 00:28:35.909 read: IOPS=539, BW=2159KiB/s (2211kB/s)(21.1MiB/10003msec) 00:28:35.909 slat (usec): min=6, max=105, avg=27.44, stdev=19.03 00:28:35.909 clat (usec): 
min=6654, max=52390, avg=29469.38, stdev=5410.89 00:28:35.909 lat (usec): min=6662, max=52413, avg=29496.82, stdev=5414.26 00:28:35.909 clat percentiles (usec): 00:28:35.909 | 1.00th=[16450], 5.00th=[23987], 10.00th=[25035], 20.00th=[25560], 00:28:35.909 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27132], 60.00th=[30802], 00:28:35.909 | 70.00th=[33424], 80.00th=[34866], 90.00th=[35914], 95.00th=[37487], 00:28:35.909 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[52167], 00:28:35.909 | 99.99th=[52167] 00:28:35.909 bw ( KiB/s): min= 1792, max= 2360, per=3.95%, avg=2142.74, stdev=157.90, samples=19 00:28:35.909 iops : min= 448, max= 590, avg=535.68, stdev=39.48, samples=19 00:28:35.909 lat (msec) : 10=0.11%, 20=3.30%, 50=96.30%, 100=0.30% 00:28:35.909 cpu : usr=98.58%, sys=0.98%, ctx=16, majf=0, minf=63 00:28:35.909 IO depths : 1=1.6%, 2=3.2%, 4=10.8%, 8=72.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:35.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.909 issued rwts: total=5399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.909 00:28:35.909 Run status group 0 (all jobs): 00:28:35.909 READ: bw=52.9MiB/s (55.5MB/s), 2144KiB/s-2478KiB/s (2196kB/s-2537kB/s), io=531MiB (557MB), run=10003-10027msec 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.909 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 
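At this point the harness tears the target down before rebuilding it for the next parameter set (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1): each destroy_subsystem in the trace (subsystems 0 and 1 above, subsystem 2 just below) pairs an nvmf_delete_subsystem RPC with a bdev_null_delete for its backing bdev, and create_subsystems then replays the inverse sequence. Driven by hand with SPDK's scripts/rpc.py, the lifecycle for one subsystem would look roughly like this sketch (RPC names and arguments are taken from the traced rpc_cmd calls; the rpc.py path and default RPC socket are assumptions):

  # teardown, matching the destroy_subsystem trace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0

  # re-create, matching the create_subsystem trace below:
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420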
00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 bdev_null0 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:35.910 20:04:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 [2024-07-24 20:04:25.924275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 bdev_null1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
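The /dev/fd/62 argument in that fio_bdev invocation is the JSON target configuration, generated on the fly by gen_nvmf_target_json: the trace that follows shows one config+=(...) heredoc per subsystem producing a bdev_nvme_attach_controller fragment, after which the fragments are comma-joined (IFS=,) and pretty-printed through jq. A condensed, runnable sketch of that joining mechanism follows; note the real helper in nvmf/common.sh embeds the fragments in a fuller document, and the array brackets here are only so jq accepts the input as-is:

  config=()
  for subsystem in 0 1; do
    config+=("{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$subsystem\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  IFS=,                                   # "${config[*]}" joins elements with the first IFS character
  printf '[%s]\n' "${config[*]}" | jq .   # yields the Nvme0/Nvme1 objects printed in the trace below

fio never sees a file on disk: the pretty-printed JSON is handed over through process substitution as --spdk_json_conf /dev/fd/62.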
00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.910 { 00:28:35.910 "params": { 00:28:35.910 "name": "Nvme$subsystem", 00:28:35.910 "trtype": "$TEST_TRANSPORT", 00:28:35.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.910 "adrfam": "ipv4", 00:28:35.910 "trsvcid": "$NVMF_PORT", 00:28:35.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.910 "hdgst": ${hdgst:-false}, 00:28:35.910 "ddgst": ${ddgst:-false} 00:28:35.910 }, 00:28:35.910 "method": "bdev_nvme_attach_controller" 00:28:35.910 } 00:28:35.910 EOF 00:28:35.910 )") 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.910 { 00:28:35.910 "params": { 00:28:35.910 "name": "Nvme$subsystem", 00:28:35.910 "trtype": "$TEST_TRANSPORT", 00:28:35.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.910 "adrfam": "ipv4", 00:28:35.910 "trsvcid": "$NVMF_PORT", 00:28:35.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.910 "hdgst": ${hdgst:-false}, 00:28:35.910 
"ddgst": ${ddgst:-false} 00:28:35.910 }, 00:28:35.910 "method": "bdev_nvme_attach_controller" 00:28:35.910 } 00:28:35.910 EOF 00:28:35.910 )") 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:35.910 20:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:35.910 "params": { 00:28:35.910 "name": "Nvme0", 00:28:35.910 "trtype": "tcp", 00:28:35.910 "traddr": "10.0.0.2", 00:28:35.910 "adrfam": "ipv4", 00:28:35.910 "trsvcid": "4420", 00:28:35.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.911 "hdgst": false, 00:28:35.911 "ddgst": false 00:28:35.911 }, 00:28:35.911 "method": "bdev_nvme_attach_controller" 00:28:35.911 },{ 00:28:35.911 "params": { 00:28:35.911 "name": "Nvme1", 00:28:35.911 "trtype": "tcp", 00:28:35.911 "traddr": "10.0.0.2", 00:28:35.911 "adrfam": "ipv4", 00:28:35.911 "trsvcid": "4420", 00:28:35.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.911 "hdgst": false, 00:28:35.911 "ddgst": false 00:28:35.911 }, 00:28:35.911 "method": "bdev_nvme_attach_controller" 00:28:35.911 }' 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:35.911 20:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.911 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:35.911 ... 00:28:35.911 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:35.911 ... 
00:28:35.911 fio-3.35 00:28:35.911 Starting 4 threads 00:28:35.911 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.209 00:28:41.209 filename0: (groupid=0, jobs=1): err= 0: pid=2228858: Wed Jul 24 20:04:32 2024 00:28:41.209 read: IOPS=2715, BW=21.2MiB/s (22.2MB/s)(107MiB/5042msec) 00:28:41.209 slat (nsec): min=6079, max=67678, avg=11794.82, stdev=8255.62 00:28:41.209 clat (usec): min=1057, max=47295, avg=2903.90, stdev=4419.82 00:28:41.209 lat (usec): min=1063, max=47307, avg=2915.69, stdev=4420.10 00:28:41.209 clat percentiles (usec): 00:28:41.209 | 1.00th=[ 1303], 5.00th=[ 1532], 10.00th=[ 1647], 20.00th=[ 1860], 00:28:41.209 | 30.00th=[ 2073], 40.00th=[ 2245], 50.00th=[ 2343], 60.00th=[ 2507], 00:28:41.209 | 70.00th=[ 2704], 80.00th=[ 2999], 90.00th=[ 3425], 95.00th=[ 3818], 00:28:41.209 | 99.00th=[42730], 99.50th=[44827], 99.90th=[46924], 99.95th=[46924], 00:28:41.209 | 99.99th=[47449] 00:28:41.209 bw ( KiB/s): min=16800, max=28128, per=30.04%, avg=21902.40, stdev=4036.97, samples=10 00:28:41.209 iops : min= 2100, max= 3516, avg=2737.80, stdev=504.62, samples=10 00:28:41.209 lat (msec) : 2=26.63%, 4=69.64%, 10=2.64%, 50=1.09% 00:28:41.209 cpu : usr=97.40%, sys=2.26%, ctx=6, majf=0, minf=164 00:28:41.209 IO depths : 1=0.4%, 2=1.9%, 4=66.2%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 issued rwts: total=13694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.209 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.209 filename0: (groupid=0, jobs=1): err= 0: pid=2228859: Wed Jul 24 20:04:32 2024 00:28:41.209 read: IOPS=2983, BW=23.3MiB/s (24.4MB/s)(117MiB/5015msec) 00:28:41.209 slat (usec): min=6, max=838, avg=10.56, stdev= 9.44 00:28:41.209 clat (usec): min=968, max=46258, avg=2652.42, stdev=3435.81 00:28:41.209 lat (usec): min=975, max=46271, avg=2662.98, stdev=3436.16 00:28:41.209 clat percentiles (usec): 00:28:41.209 | 1.00th=[ 1303], 5.00th=[ 1450], 10.00th=[ 1582], 20.00th=[ 1778], 00:28:41.209 | 30.00th=[ 1975], 40.00th=[ 2147], 50.00th=[ 2311], 60.00th=[ 2442], 00:28:41.209 | 70.00th=[ 2638], 80.00th=[ 2933], 90.00th=[ 3359], 95.00th=[ 3752], 00:28:41.209 | 99.00th=[ 5080], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:28:41.209 | 99.99th=[46400] 00:28:41.209 bw ( KiB/s): min=17520, max=30096, per=32.66%, avg=23816.89, stdev=3739.67, samples=9 00:28:41.209 iops : min= 2190, max= 3762, avg=2977.11, stdev=467.46, samples=9 00:28:41.209 lat (usec) : 1000=0.01% 00:28:41.209 lat (msec) : 2=31.76%, 4=65.03%, 10=2.55%, 50=0.64% 00:28:41.209 cpu : usr=96.47%, sys=2.61%, ctx=21, majf=0, minf=83 00:28:41.209 IO depths : 1=0.3%, 2=1.9%, 4=65.7%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 issued rwts: total=14964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.209 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.209 filename1: (groupid=0, jobs=1): err= 0: pid=2228860: Wed Jul 24 20:04:32 2024 00:28:41.209 read: IOPS=808, BW=6466KiB/s (6621kB/s)(31.7MiB/5023msec) 00:28:41.209 slat (nsec): min=5953, max=41820, avg=9816.85, stdev=5391.97 00:28:41.209 clat (usec): min=1036, max=47156, avg=9862.96, stdev=15354.89 00:28:41.209 lat (usec): min=1049, max=47169, avg=9872.77, stdev=15354.96 00:28:41.209 clat 
percentiles (usec): 00:28:41.209 | 1.00th=[ 1713], 5.00th=[ 2073], 10.00th=[ 2442], 20.00th=[ 2704], 00:28:41.209 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3294], 60.00th=[ 3523], 00:28:41.209 | 70.00th=[ 3884], 80.00th=[ 4817], 90.00th=[44827], 95.00th=[45876], 00:28:41.209 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:28:41.209 | 99.99th=[46924] 00:28:41.209 bw ( KiB/s): min= 1408, max=17120, per=8.89%, avg=6483.20, stdev=5822.68, samples=10 00:28:41.209 iops : min= 176, max= 2140, avg=810.40, stdev=727.84, samples=10 00:28:41.209 lat (msec) : 2=4.06%, 4=67.71%, 10=12.46%, 50=15.76% 00:28:41.209 cpu : usr=98.55%, sys=1.12%, ctx=8, majf=0, minf=68 00:28:41.209 IO depths : 1=2.8%, 2=7.3%, 4=65.7%, 8=24.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 issued rwts: total=4060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.209 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.209 filename1: (groupid=0, jobs=1): err= 0: pid=2228861: Wed Jul 24 20:04:32 2024 00:28:41.209 read: IOPS=2646, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:28:41.209 slat (nsec): min=5924, max=48772, avg=10281.09, stdev=6055.96 00:28:41.209 clat (usec): min=922, max=45557, avg=2994.69, stdev=4617.20 00:28:41.209 lat (usec): min=944, max=45580, avg=3004.97, stdev=4617.23 00:28:41.209 clat percentiles (usec): 00:28:41.209 | 1.00th=[ 1336], 5.00th=[ 1450], 10.00th=[ 1614], 20.00th=[ 1860], 00:28:41.209 | 30.00th=[ 2073], 40.00th=[ 2245], 50.00th=[ 2376], 60.00th=[ 2573], 00:28:41.209 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 3556], 95.00th=[ 3949], 00:28:41.209 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:28:41.209 | 99.99th=[45351] 00:28:41.209 bw ( KiB/s): min=16048, max=30032, per=29.34%, avg=21397.33, stdev=4143.72, samples=9 00:28:41.209 iops : min= 2006, max= 3754, avg=2674.67, stdev=517.97, samples=9 00:28:41.209 lat (usec) : 1000=0.03% 00:28:41.209 lat (msec) : 2=26.26%, 4=69.21%, 10=3.29%, 50=1.21% 00:28:41.209 cpu : usr=97.14%, sys=2.48%, ctx=9, majf=0, minf=79 00:28:41.209 IO depths : 1=0.3%, 2=2.0%, 4=65.5%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.209 issued rwts: total=13239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.209 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.209 00:28:41.209 Run status group 0 (all jobs): 00:28:41.209 READ: bw=71.2MiB/s (74.7MB/s), 6466KiB/s-23.3MiB/s (6621kB/s-24.4MB/s), io=359MiB (376MB), run=5002-5042msec 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.209 00:28:41.209 real 0m24.297s 00:28:41.209 user 4m51.586s 00:28:41.209 sys 0m4.527s 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.209 ************************************ 00:28:41.209 END TEST fio_dif_rand_params 00:28:41.209 ************************************ 00:28:41.209 20:04:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:41.209 20:04:32 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:41.209 20:04:32 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.209 20:04:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:41.209 ************************************ 00:28:41.209 START TEST fio_dif_digest 00:28:41.209 ************************************ 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:28:41.209 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.210 bdev_null0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.210 [2024-07-24 20:04:32.468099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.210 
20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.210 { 00:28:41.210 "params": { 00:28:41.210 "name": "Nvme$subsystem", 00:28:41.210 "trtype": "$TEST_TRANSPORT", 00:28:41.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.210 "adrfam": "ipv4", 00:28:41.210 "trsvcid": "$NVMF_PORT", 00:28:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.210 "hdgst": ${hdgst:-false}, 00:28:41.210 "ddgst": ${ddgst:-false} 00:28:41.210 }, 00:28:41.210 "method": "bdev_nvme_attach_controller" 00:28:41.210 } 00:28:41.210 EOF 00:28:41.210 )") 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:41.210 "params": { 00:28:41.210 "name": "Nvme0", 00:28:41.210 "trtype": "tcp", 00:28:41.210 "traddr": "10.0.0.2", 00:28:41.210 "adrfam": "ipv4", 00:28:41.210 "trsvcid": "4420", 00:28:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:41.210 "hdgst": true, 00:28:41.210 "ddgst": true 00:28:41.210 }, 00:28:41.210 "method": "bdev_nvme_attach_controller" 00:28:41.210 }' 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:41.210 20:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.468 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:41.468 ... 
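For reference, the digest job above is driven through SPDK's fio bdev plugin rather than the kernel initiator: gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed at 20:04:32 (with TCP header and data digests forced on), and fio_bdev hands it to fio via --spdk_json_conf over an fd redirection. A minimal standalone sketch of the same invocation follows; the files bdev.json and digest.fio are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 pipes, the outer "subsystems" wrapper is the layout gen_nvmf_target_json is assumed to produce, and the job file is reconstructed from the parameters set at target/dif.sh@127 and echoed in the fio header above.

  # bdev.json - wrapper layout assumed; the inner object is verbatim from the trace above
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": true, "ddgst": true },
        "method": "bdev_nvme_attach_controller"
      } ]
    } ]
  }

  # digest.fio - reconstructed from bs=128k, numjobs=3, iodepth=3, runtime=10 in the trace
  [filename0]
  ioengine=spdk_bdev
  thread=1                 # the SPDK fio plugin requires thread mode
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1
  filename=Nvme0n1         # assumed: controller "Nvme0", namespace 1

  # run it the same way the LD_PRELOAD line in the trace does, against the files above
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio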
00:28:41.468 fio-3.35 00:28:41.468 Starting 3 threads 00:28:41.468 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.663 00:28:53.663 filename0: (groupid=0, jobs=1): err= 0: pid=2230128: Wed Jul 24 20:04:43 2024 00:28:53.663 read: IOPS=327, BW=41.0MiB/s (42.9MB/s)(410MiB/10005msec) 00:28:53.663 slat (nsec): min=6439, max=24987, avg=10429.21, stdev=2389.16 00:28:53.663 clat (usec): min=5418, max=56459, avg=9143.55, stdev=3999.92 00:28:53.663 lat (usec): min=5425, max=56466, avg=9153.98, stdev=4000.50 00:28:53.663 clat percentiles (usec): 00:28:53.663 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6980], 00:28:53.663 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:28:53.663 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11600], 95.00th=[12780], 00:28:53.663 | 99.00th=[15664], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:28:53.663 | 99.99th=[56361] 00:28:53.663 bw ( KiB/s): min=32512, max=48640, per=48.06%, avg=41862.74, stdev=5488.84, samples=19 00:28:53.663 iops : min= 254, max= 380, avg=327.05, stdev=42.88, samples=19 00:28:53.663 lat (msec) : 10=70.04%, 20=29.32%, 50=0.09%, 100=0.55% 00:28:53.663 cpu : usr=94.58%, sys=4.99%, ctx=19, majf=0, minf=180 00:28:53.663 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.663 issued rwts: total=3278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.664 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.664 filename0: (groupid=0, jobs=1): err= 0: pid=2230129: Wed Jul 24 20:04:43 2024 00:28:53.664 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(223MiB/10045msec) 00:28:53.664 slat (nsec): min=6415, max=25311, avg=11257.99, stdev=2186.74 00:28:53.664 clat (usec): min=6006, max=97435, avg=16862.88, stdev=14150.78 00:28:53.664 lat (usec): min=6017, max=97448, avg=16874.14, stdev=14150.73 00:28:53.664 clat percentiles (usec): 00:28:53.664 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10159], 00:28:53.664 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:28:53.664 | 70.00th=[13304], 80.00th=[14746], 90.00th=[52167], 95.00th=[54789], 00:28:53.664 | 99.00th=[57934], 99.50th=[58459], 99.90th=[60031], 99.95th=[96994], 00:28:53.664 | 99.99th=[96994] 00:28:53.664 bw ( KiB/s): min=17152, max=29952, per=26.17%, avg=22796.80, stdev=3946.30, samples=20 00:28:53.664 iops : min= 134, max= 234, avg=178.10, stdev=30.83, samples=20 00:28:53.664 lat (msec) : 10=17.50%, 20=70.16%, 50=0.56%, 100=11.78% 00:28:53.664 cpu : usr=96.03%, sys=3.61%, ctx=15, majf=0, minf=182 00:28:53.664 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.664 issued rwts: total=1783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.664 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.664 filename0: (groupid=0, jobs=1): err= 0: pid=2230130: Wed Jul 24 20:04:43 2024 00:28:53.664 read: IOPS=177, BW=22.2MiB/s (23.2MB/s)(222MiB/10012msec) 00:28:53.664 slat (nsec): min=6524, max=29455, avg=11671.85, stdev=1805.11 00:28:53.664 clat (usec): min=6466, max=57704, avg=16905.42, stdev=14001.89 00:28:53.664 lat (usec): min=6473, max=57716, avg=16917.09, stdev=14001.89 00:28:53.664 clat percentiles (usec): 
00:28:53.664 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10028], 00:28:53.664 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12649], 00:28:53.664 | 70.00th=[13304], 80.00th=[14353], 90.00th=[51643], 95.00th=[54264], 00:28:53.664 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57410], 99.95th=[57934], 00:28:53.664 | 99.99th=[57934] 00:28:53.664 bw ( KiB/s): min=15872, max=27136, per=26.05%, avg=22694.40, stdev=3004.19, samples=20 00:28:53.664 iops : min= 124, max= 212, avg=177.30, stdev=23.47, samples=20 00:28:53.664 lat (msec) : 10=19.83%, 20=67.49%, 50=0.73%, 100=11.94% 00:28:53.664 cpu : usr=96.02%, sys=3.66%, ctx=16, majf=0, minf=155 00:28:53.664 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.664 issued rwts: total=1775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.664 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.664 00:28:53.664 Run status group 0 (all jobs): 00:28:53.664 READ: bw=85.1MiB/s (89.2MB/s), 22.2MiB/s-41.0MiB/s (23.2MB/s-42.9MB/s), io=855MiB (896MB), run=10005-10045msec 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.664 00:28:53.664 real 0m11.065s 00:28:53.664 user 0m35.183s 00:28:53.664 sys 0m1.515s 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.664 20:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.664 ************************************ 00:28:53.664 END TEST fio_dif_digest 00:28:53.664 ************************************ 00:28:53.664 20:04:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:53.664 20:04:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.664 rmmod nvme_tcp 00:28:53.664 
rmmod nvme_fabrics 00:28:53.664 rmmod nvme_keyring 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2221463 ']' 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2221463 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2221463 ']' 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2221463 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2221463 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2221463' 00:28:53.664 killing process with pid 2221463 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2221463 00:28:53.664 20:04:43 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2221463 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:53.664 20:04:43 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:55.076 Waiting for block devices as requested 00:28:55.076 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:55.076 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:55.076 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:55.076 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:55.335 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:55.335 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:55.335 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:55.335 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:55.594 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:55.594 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:55.594 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:55.594 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:55.852 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:55.852 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:55.852 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:56.118 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:56.118 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:56.118 20:04:47 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:56.118 20:04:47 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:56.118 20:04:47 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.118 20:04:47 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:56.119 20:04:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.119 20:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:56.119 20:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.026 20:04:49 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.285 00:28:58.285 real 1m13.550s 00:28:58.285 user 7m9.154s 00:28:58.285 sys 0m18.421s 00:28:58.285 20:04:49 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.285 20:04:49 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:28:58.285 ************************************ 00:28:58.285 END TEST nvmf_dif 00:28:58.285 ************************************ 00:28:58.285 20:04:49 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:58.285 20:04:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:58.285 20:04:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.285 20:04:49 -- common/autotest_common.sh@10 -- # set +x 00:28:58.285 ************************************ 00:28:58.285 START TEST nvmf_abort_qd_sizes 00:28:58.285 ************************************ 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:58.285 * Looking for test storage... 00:28:58.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.285 20:04:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.285 20:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:03.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:03.555 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:03.555 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:03.556 Found net devices under 0000:86:00.0: cvl_0_0 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:03.556 Found net devices under 0000:86:00.1: cvl_0_1 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
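At this point the helper has mapped both E810 ports (device ID 0x159b, driver ice) to their kernel netdevs, cvl_0_0 and cvl_0_1, by globbing /sys/bus/pci/devices/$pci/net/. The same mapping can be checked by hand; the loop below is a hedged equivalent outside the harness, assuming lspci is available, not a command the test itself runs.

  # list every Intel E810 (8086:159b) port and the netdev behind it, via the same sysfs path
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/)"
  done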
00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:03.556 20:04:54 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:03.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:29:03.556 00:29:03.556 --- 10.0.0.2 ping statistics --- 00:29:03.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.556 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:29:03.556 00:29:03.556 --- 10.0.0.1 ping statistics --- 00:29:03.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.556 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:03.556 20:04:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:06.841 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:06.841 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.407 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2237890 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2237890 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2237890 ']' 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
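The topology the ping exchange above just verified is worth calling out before the target app comes up: nvmf_tcp_init splits the two E810 ports across network namespaces, so the SPDK target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays in the root namespace on 10.0.0.1. Condensed from the commands in the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt is then launched under "ip netns exec cvl_0_0_ns_spdk", as the nvmfpid lines show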
00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:07.408 20:04:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.408 [2024-07-24 20:04:58.923997] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:29:07.408 [2024-07-24 20:04:58.924039] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.408 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.408 [2024-07-24 20:04:58.981682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.667 [2024-07-24 20:04:59.070027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.667 [2024-07-24 20:04:59.070070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.667 [2024-07-24 20:04:59.070078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.667 [2024-07-24 20:04:59.070086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.667 [2024-07-24 20:04:59.070091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.667 [2024-07-24 20:04:59.073063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.667 [2024-07-24 20:04:59.073098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.667 [2024-07-24 20:04:59.073100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.667 [2024-07-24 20:04:59.073080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.234 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 
-- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.235 20:04:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:08.235 ************************************ 00:29:08.235 START TEST spdk_target_abort 00:29:08.235 ************************************ 00:29:08.235 20:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:29:08.235 20:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:08.235 20:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:08.235 20:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.235 20:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.523 spdk_targetn1 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.523 [2024-07-24 20:05:02.651740] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.523 [2024-07-24 20:05:02.684753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:11.523 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.524 20:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.524 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.811 Initializing NVMe Controllers 00:29:14.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:14.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:14.811 Initialization complete. Launching workers. 00:29:14.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5137, failed: 0 00:29:14.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1755, failed to submit 3382 00:29:14.811 success 898, unsuccess 857, failed 0 00:29:14.811 20:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.811 20:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.811 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.099 Initializing NVMe Controllers 00:29:18.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:18.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:18.099 Initialization complete. Launching workers. 00:29:18.099 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8666, failed: 0 00:29:18.099 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7445 00:29:18.099 success 342, unsuccess 879, failed 0 00:29:18.099 20:05:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:18.099 20:05:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:18.099 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.388 Initializing NVMe Controllers 00:29:21.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:21.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:21.388 Initialization complete. Launching workers. 
00:29:21.388 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34064, failed: 0 00:29:21.388 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2781, failed to submit 31283 00:29:21.388 success 681, unsuccess 2100, failed 0 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.388 20:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2237890 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2237890 ']' 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2237890 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2237890 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:22.361 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2237890' 00:29:22.361 killing process with pid 2237890 00:29:22.622 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2237890 00:29:22.622 20:05:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2237890 00:29:22.622 00:29:22.622 real 0m14.321s 00:29:22.622 user 0m57.266s 00:29:22.622 sys 0m2.055s 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:22.622 ************************************ 00:29:22.622 END TEST spdk_target_abort 00:29:22.622 ************************************ 00:29:22.622 20:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:22.622 20:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:22.622 20:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:22.622 20:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:22.622 ************************************ 00:29:22.622 START TEST kernel_target_abort 00:29:22.622 
************************************ 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.622 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:22.882 20:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:25.417 Waiting for block devices as requested 00:29:25.417 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:25.417 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:25.417 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:25.676 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:25.676 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:25.676 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:25.676 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:25.934 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:25.934 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:25.934 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:26.193 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:26.193 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:26.193 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:26.193 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:26.451 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:26.451 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:26.451 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:26.451 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:26.710 No valid GPT data, bailing 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:26.710 20:05:18 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:26.710 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:26.711 00:29:26.711 Discovery Log Number of Records 2, Generation counter 2 00:29:26.711 =====Discovery Log Entry 0====== 00:29:26.711 trtype: tcp 00:29:26.711 adrfam: ipv4 00:29:26.711 subtype: current discovery subsystem 00:29:26.711 treq: not specified, sq flow control disable supported 00:29:26.711 portid: 1 00:29:26.711 trsvcid: 4420 00:29:26.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:26.711 traddr: 10.0.0.1 00:29:26.711 eflags: none 00:29:26.711 sectype: none 00:29:26.711 =====Discovery Log Entry 1====== 00:29:26.711 trtype: tcp 00:29:26.711 adrfam: ipv4 00:29:26.711 subtype: nvme subsystem 00:29:26.711 treq: not specified, sq flow control disable supported 00:29:26.711 portid: 1 00:29:26.711 trsvcid: 4420 00:29:26.711 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:26.711 traddr: 10.0.0.1 00:29:26.711 eflags: none 00:29:26.711 sectype: none 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:26.711 20:05:18 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:26.711 20:05:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:26.711 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.994 Initializing NVMe Controllers 00:29:29.994 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:29.994 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:29.994 Initialization complete. Launching workers. 00:29:29.994 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30536, failed: 0 00:29:29.994 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30536, failed to submit 0 00:29:29.994 success 0, unsuccess 30536, failed 0 00:29:29.994 20:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:29.994 20:05:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:29.994 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.277 Initializing NVMe Controllers 00:29:33.277 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:33.277 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:33.277 Initialization complete. Launching workers. 
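The configure_kernel_target steps traced earlier in this test (nvmf/common.sh@658-677) amount to the configfs sequence below. The mkdir targets, echoed values, and the final ln -s are taken from the trace; the attribute file names on the right-hand side of the redirects are not visible in an xtrace, so the standard kernel nvmet configfs names are assumed here.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute path
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed attribute path
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # backing device found above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the port

Once the symlink is in place, the nvme discover call against 10.0.0.1:4420 returns the two discovery log entries shown above, and the abort runs in this test target the kernel-side subsystem instead of the SPDK one.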
00:29:33.277 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64526, failed: 0 00:29:33.277 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16286, failed to submit 48240 00:29:33.277 success 0, unsuccess 16286, failed 0 00:29:33.277 20:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:33.277 20:05:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:33.277 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.807 Initializing NVMe Controllers 00:29:35.807 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:35.807 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:35.807 Initialization complete. Launching workers. 00:29:35.807 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63357, failed: 0 00:29:35.807 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15846, failed to submit 47511 00:29:35.807 success 0, unsuccess 15846, failed 0 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:35.807 20:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:39.091 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:39.091 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:39.091 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:39.658 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:39.658 00:29:39.658 real 0m16.813s 00:29:39.658 user 0m4.569s 00:29:39.658 sys 0m5.422s 00:29:39.658 20:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.658 20:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.658 ************************************ 00:29:39.658 END TEST kernel_target_abort 00:29:39.658 ************************************ 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.658 rmmod nvme_tcp 00:29:39.658 rmmod nvme_fabrics 00:29:39.658 rmmod nvme_keyring 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2237890 ']' 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2237890 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2237890 ']' 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2237890 00:29:39.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2237890) - No such process 00:29:39.658 20:05:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2237890 is not found' 00:29:39.658 Process with pid 2237890 is not found 00:29:39.659 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:39.659 20:05:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:42.197 Waiting for block devices as requested 00:29:42.197 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:42.456 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:42.456 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:42.456 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:42.716 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:42.716 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:42.716 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:42.716 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:42.975 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:42.975 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:42.975 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:42.975 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:43.234 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:43.234 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:43.234 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:43.493 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:29:43.493 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:43.493 20:05:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.029 20:05:37 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:46.029 00:29:46.029 real 0m47.353s 00:29:46.029 user 1m5.883s 00:29:46.029 sys 0m15.582s 00:29:46.029 20:05:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:46.029 20:05:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:46.029 ************************************ 00:29:46.029 END TEST nvmf_abort_qd_sizes 00:29:46.029 ************************************ 00:29:46.029 20:05:37 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:46.029 20:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:46.029 20:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:46.029 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:29:46.029 ************************************ 00:29:46.029 START TEST keyring_file 00:29:46.029 ************************************ 00:29:46.029 20:05:37 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:46.029 * Looking for test storage... 
00:29:46.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:46.029 20:05:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:46.029 20:05:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.029 20:05:37 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.029 20:05:37 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.029 20:05:37 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.030 20:05:37 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.030 20:05:37 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.030 20:05:37 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.030 20:05:37 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.030 20:05:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:46.030 20:05:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IEvr5ZwvNt 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:46.030 20:05:37 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IEvr5ZwvNt 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IEvr5ZwvNt 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.IEvr5ZwvNt 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pyKTq3X262 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:46.030 20:05:37 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pyKTq3X262 00:29:46.030 20:05:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pyKTq3X262 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pyKTq3X262 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@30 -- # tgtpid=2246687 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2246687 00:29:46.030 20:05:37 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2246687 ']' 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.030 20:05:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:46.030 [2024-07-24 20:05:37.354789] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
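The two /tmp/tmp.* key files registered with the bperf keyring are built by prep_key, as traced above: mktemp a path, write the PSK in NVMe-TLS interchange form, then chmod 0600. The body of the traced 'python -' step is not shown in the xtrace, so the framing below (prefix, two-hex-digit hash indicator, base64 of the key bytes plus an appended CRC-32) is an assumption about the interchange encoding, not SPDK's verbatim helper.

    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)    # e.g. /tmp/tmp.IEvr5ZwvNt above
        # assumed encoding: base64(key || crc32(key)), framed as prefix:digest:...:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k+crc).decode()), end="")' "$key" "$digest" > "$path"
        chmod 0600 "$path"    # keyring_file requires owner-only permissions
        echo "$path"
    }

The 0600 requirement is exercised later in this suite: a chmod 0660 on the same file makes keyring_file_add_key fail with 'Invalid permissions for key file'.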
00:29:46.030 [2024-07-24 20:05:37.354839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246687 ] 00:29:46.030 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.030 [2024-07-24 20:05:37.407836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.030 [2024-07-24 20:05:37.487254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.598 20:05:38 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:46.598 20:05:38 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:46.598 20:05:38 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:46.598 20:05:38 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.598 20:05:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:46.598 [2024-07-24 20:05:38.159575] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.598 null0 00:29:46.598 [2024-07-24 20:05:38.191650] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:46.598 [2024-07-24 20:05:38.191953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:46.860 [2024-07-24 20:05:38.199643] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:46.860 20:05:38 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.860 20:05:38 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:46.860 20:05:38 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:46.860 20:05:38 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:46.860 20:05:38 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:46.860 20:05:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:46.861 [2024-07-24 20:05:38.211675] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:46.861 request: 00:29:46.861 { 00:29:46.861 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.861 "secure_channel": false, 00:29:46.861 "listen_address": { 00:29:46.861 "trtype": "tcp", 00:29:46.861 "traddr": "127.0.0.1", 00:29:46.861 "trsvcid": "4420" 00:29:46.861 }, 00:29:46.861 "method": "nvmf_subsystem_add_listener", 00:29:46.861 "req_id": 1 00:29:46.861 } 00:29:46.861 Got JSON-RPC error response 00:29:46.861 response: 00:29:46.861 { 00:29:46.861 "code": -32602, 00:29:46.861 "message": "Invalid parameters" 00:29:46.861 } 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 
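The xtrace around this point (es=1 just above, the (( es > 128 )) check continuing below) is the suite's NOT wrapper, which asserts that a command is expected to fail. Reconstructed from the traced lines alone, it reduces to roughly the following; the full helper also validates its argument (the valid_exec_arg / type -t calls in the trace) and honors an allow-list of exit codes (the [[ -n '' ]] check), both elided here.

    NOT() {
        local es=0
        "$@" || es=$?
        # statuses above 128 indicate death by signal; the trace only shows the
        # comparison, so the exact remapping here is an assumption
        if (( es > 128 )); then
            es=$((es % 128))
        fi
        (( !es == 0 ))    # succeed (return 0) exactly when the wrapped command failed
    }

So the duplicate nvmf_subsystem_add_listener call failing with 'Listener already exists' is the expected outcome, and the test proceeds to start bdevperf.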
00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.861 20:05:38 keyring_file -- keyring/file.sh@46 -- # bperfpid=2246765 00:29:46.861 20:05:38 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:46.861 20:05:38 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2246765 /var/tmp/bperf.sock 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2246765 ']' 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.861 20:05:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:46.861 [2024-07-24 20:05:38.263311] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 00:29:46.861 [2024-07-24 20:05:38.263353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246765 ] 00:29:46.861 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.861 [2024-07-24 20:05:38.315673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.861 [2024-07-24 20:05:38.395750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.486 20:05:39 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.486 20:05:39 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:47.486 20:05:39 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:47.486 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:47.746 20:05:39 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pyKTq3X262 00:29:47.746 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pyKTq3X262 00:29:48.005 20:05:39 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:48.005 20:05:39 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:48.005 20:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.005 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.005 20:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.264 20:05:39 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.IEvr5ZwvNt == \/\t\m\p\/\t\m\p\.\I\E\v\r\5\Z\w\v\N\t ]] 00:29:48.265 20:05:39 keyring_file -- 
keyring/file.sh@52 -- # jq -r .path 00:29:48.265 20:05:39 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.265 20:05:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pyKTq3X262 == \/\t\m\p\/\t\m\p\.\p\y\K\T\q\3\X\2\6\2 ]] 00:29:48.265 20:05:39 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.265 20:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.523 20:05:39 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:48.523 20:05:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:48.523 20:05:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:48.523 20:05:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.523 20:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.523 20:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:48.523 20:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.782 20:05:40 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:48.782 20:05:40 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.782 20:05:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:48.782 [2024-07-24 20:05:40.330801] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:49.041 nvme0n1 00:29:49.041 20:05:40 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.041 20:05:40 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:49.041 20:05:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:49.041 20:05:40 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:49.041 20:05:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.299 20:05:40 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:49.299 20:05:40 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:49.300 Running I/O for 1 seconds... 00:29:50.678 00:29:50.678 Latency(us) 00:29:50.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.678 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:50.678 nvme0n1 : 1.03 3880.76 15.16 0.00 0.00 32628.12 7351.43 47413.87 00:29:50.678 =================================================================================================================== 00:29:50.678 Total : 3880.76 15.16 0.00 0.00 32628.12 7351.43 47413.87 00:29:50.678 0 00:29:50.678 20:05:41 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:50.678 20:05:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:50.678 20:05:42 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.678 20:05:42 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:50.678 20:05:42 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:50.678 20:05:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.937 20:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.937 20:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:50.937 20:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.937 20:05:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:50.937 20:05:42 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.937 20:05:42 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.937 20:05:42 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:50.937 20:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:51.196 [2024-07-24 20:05:42.620594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:51.196 [2024-07-24 20:05:42.621015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3820 (107): Transport endpoint is not connected 00:29:51.196 [2024-07-24 20:05:42.622009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3820 (9): Bad file descriptor 00:29:51.196 [2024-07-24 20:05:42.623009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:51.196 [2024-07-24 20:05:42.623018] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:51.196 [2024-07-24 20:05:42.623025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:51.196 request: 00:29:51.196 { 00:29:51.196 "name": "nvme0", 00:29:51.196 "trtype": "tcp", 00:29:51.196 "traddr": "127.0.0.1", 00:29:51.196 "adrfam": "ipv4", 00:29:51.196 "trsvcid": "4420", 00:29:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:51.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:51.196 "prchk_reftag": false, 00:29:51.196 "prchk_guard": false, 00:29:51.196 "hdgst": false, 00:29:51.197 "ddgst": false, 00:29:51.197 "psk": "key1", 00:29:51.197 "method": "bdev_nvme_attach_controller", 00:29:51.197 "req_id": 1 00:29:51.197 } 00:29:51.197 Got JSON-RPC error response 00:29:51.197 response: 00:29:51.197 { 00:29:51.197 "code": -5, 00:29:51.197 "message": "Input/output error" 00:29:51.197 } 00:29:51.197 20:05:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:51.197 20:05:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:51.197 20:05:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:51.197 20:05:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:51.197 20:05:42 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:51.197 20:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:51.197 20:05:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:51.197 20:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.197 20:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:51.197 20:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.455 20:05:42 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:51.455 20:05:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:51.455 20:05:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:51.455 20:05:42 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:51.455 20:05:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:51.455 20:05:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.456 20:05:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.456 20:05:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:51.456 20:05:42 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:51.456 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:51.715 20:05:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:51.715 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:51.983 20:05:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:51.983 20:05:43 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:51.983 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.983 20:05:43 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:51.983 20:05:43 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IEvr5ZwvNt 00:29:51.983 20:05:43 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.983 20:05:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:51.983 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:52.245 [2024-07-24 20:05:43.700964] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IEvr5ZwvNt': 0100660 00:29:52.245 [2024-07-24 20:05:43.700988] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:52.245 request: 00:29:52.245 { 00:29:52.245 "name": "key0", 00:29:52.245 "path": "/tmp/tmp.IEvr5ZwvNt", 00:29:52.245 "method": "keyring_file_add_key", 00:29:52.245 "req_id": 1 00:29:52.245 } 00:29:52.245 Got JSON-RPC error response 00:29:52.245 response: 00:29:52.245 { 00:29:52.245 "code": -1, 00:29:52.245 "message": "Operation not permitted" 00:29:52.245 } 00:29:52.245 20:05:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:52.245 20:05:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:52.245 20:05:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:52.245 20:05:43 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:52.245 20:05:43 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.IEvr5ZwvNt 00:29:52.245 20:05:43 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:52.245 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IEvr5ZwvNt 00:29:52.504 20:05:43 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.IEvr5ZwvNt 00:29:52.504 20:05:43 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:52.504 20:05:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:52.504 20:05:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:52.504 20:05:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:52.504 20:05:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:52.504 20:05:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:52.504 20:05:44 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:52.504 20:05:44 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.504 20:05:44 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:52.504 20:05:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:52.763 [2024-07-24 20:05:44.258442] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.IEvr5ZwvNt': No such file or directory 00:29:52.763 [2024-07-24 20:05:44.258463] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:52.763 [2024-07-24 20:05:44.258482] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:52.763 [2024-07-24 20:05:44.258488] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.763 [2024-07-24 20:05:44.258493] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:52.763 request: 00:29:52.763 { 00:29:52.763 "name": "nvme0", 00:29:52.763 "trtype": "tcp", 00:29:52.763 "traddr": "127.0.0.1", 00:29:52.763 "adrfam": "ipv4", 00:29:52.763 
"trsvcid": "4420", 00:29:52.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.763 "prchk_reftag": false, 00:29:52.763 "prchk_guard": false, 00:29:52.763 "hdgst": false, 00:29:52.763 "ddgst": false, 00:29:52.763 "psk": "key0", 00:29:52.763 "method": "bdev_nvme_attach_controller", 00:29:52.763 "req_id": 1 00:29:52.763 } 00:29:52.763 Got JSON-RPC error response 00:29:52.763 response: 00:29:52.763 { 00:29:52.763 "code": -19, 00:29:52.763 "message": "No such device" 00:29:52.763 } 00:29:52.763 20:05:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:52.763 20:05:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:52.763 20:05:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:52.763 20:05:44 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:52.763 20:05:44 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:52.763 20:05:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:53.023 20:05:44 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UKxnFvovW9 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:53.023 20:05:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UKxnFvovW9 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UKxnFvovW9 00:29:53.023 20:05:44 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.UKxnFvovW9 00:29:53.023 20:05:44 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UKxnFvovW9 00:29:53.023 20:05:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UKxnFvovW9 00:29:53.282 20:05:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:53.282 20:05:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:53.541 nvme0n1 00:29:53.541 
20:05:44 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:53.541 20:05:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:53.541 20:05:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:53.541 20:05:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:53.541 20:05:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:53.541 20:05:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:53.541 20:05:45 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:53.541 20:05:45 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:53.541 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:53.800 20:05:45 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:53.800 20:05:45 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:53.800 20:05:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:53.800 20:05:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:53.800 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:54.059 20:05:45 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:54.059 20:05:45 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:54.059 20:05:45 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:54.059 20:05:45 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:54.059 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:54.318 20:05:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:54.318 20:05:45 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:54.318 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:54.577 20:05:45 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:54.577 20:05:45 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UKxnFvovW9 00:29:54.577 20:05:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UKxnFvovW9 00:29:54.577 20:05:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pyKTq3X262 00:29:54.577 20:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pyKTq3X262 00:29:54.836 20:05:46 
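Every refcount check in this trace expands the same small helpers from keyring/common.sh; reconstructed from the @8/@10/@12 lines above, they are nothing more than rpc.py plus two jq filters. A sketch, with the paths taken verbatim from the trace:

    bperf_cmd() {    # keyring/common.sh@8: RPC against the bdevperf socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock "$@"
    }
    get_key() {      # @10: pick one key object out of keyring_get_keys
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }
    get_refcnt() {   # @12: a live TLS controller holds an extra reference
        get_key "$1" | jq -r .refcnt
    }

This is why file.sh@99 asserts (( 2 == 2 )) right after the attach, and @102 drops back to (( 1 == 1 )) once keyring_file_remove_key has marked the key removed but the controller still pins it.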
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:54.836 20:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:55.095 nvme0n1 00:29:55.095 20:05:46 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:55.095 20:05:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:55.354 20:05:46 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:55.354 "subsystems": [ 00:29:55.354 { 00:29:55.354 "subsystem": "keyring", 00:29:55.354 "config": [ 00:29:55.354 { 00:29:55.354 "method": "keyring_file_add_key", 00:29:55.354 "params": { 00:29:55.354 "name": "key0", 00:29:55.354 "path": "/tmp/tmp.UKxnFvovW9" 00:29:55.354 } 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "method": "keyring_file_add_key", 00:29:55.354 "params": { 00:29:55.354 "name": "key1", 00:29:55.354 "path": "/tmp/tmp.pyKTq3X262" 00:29:55.354 } 00:29:55.354 } 00:29:55.354 ] 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "subsystem": "iobuf", 00:29:55.354 "config": [ 00:29:55.354 { 00:29:55.354 "method": "iobuf_set_options", 00:29:55.354 "params": { 00:29:55.354 "small_pool_count": 8192, 00:29:55.354 "large_pool_count": 1024, 00:29:55.354 "small_bufsize": 8192, 00:29:55.354 "large_bufsize": 135168 00:29:55.354 } 00:29:55.354 } 00:29:55.354 ] 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "subsystem": "sock", 00:29:55.354 "config": [ 00:29:55.354 { 00:29:55.354 "method": "sock_set_default_impl", 00:29:55.354 "params": { 00:29:55.354 "impl_name": "posix" 00:29:55.354 } 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "method": "sock_impl_set_options", 00:29:55.354 "params": { 00:29:55.354 "impl_name": "ssl", 00:29:55.354 "recv_buf_size": 4096, 00:29:55.354 "send_buf_size": 4096, 00:29:55.354 "enable_recv_pipe": true, 00:29:55.354 "enable_quickack": false, 00:29:55.354 "enable_placement_id": 0, 00:29:55.354 "enable_zerocopy_send_server": true, 00:29:55.354 "enable_zerocopy_send_client": false, 00:29:55.354 "zerocopy_threshold": 0, 00:29:55.354 "tls_version": 0, 00:29:55.354 "enable_ktls": false 00:29:55.354 } 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "method": "sock_impl_set_options", 00:29:55.354 "params": { 00:29:55.354 "impl_name": "posix", 00:29:55.354 "recv_buf_size": 2097152, 00:29:55.354 "send_buf_size": 2097152, 00:29:55.354 "enable_recv_pipe": true, 00:29:55.354 "enable_quickack": false, 00:29:55.354 "enable_placement_id": 0, 00:29:55.354 "enable_zerocopy_send_server": true, 00:29:55.354 "enable_zerocopy_send_client": false, 00:29:55.354 "zerocopy_threshold": 0, 00:29:55.354 "tls_version": 0, 00:29:55.354 "enable_ktls": false 00:29:55.354 } 00:29:55.354 } 00:29:55.354 ] 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "subsystem": "vmd", 00:29:55.354 "config": [] 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "subsystem": "accel", 00:29:55.354 "config": [ 00:29:55.354 { 00:29:55.354 "method": "accel_set_options", 00:29:55.354 "params": { 00:29:55.354 "small_cache_size": 128, 00:29:55.354 "large_cache_size": 16, 00:29:55.354 "task_count": 2048, 00:29:55.354 "sequence_count": 2048, 00:29:55.354 "buf_count": 2048 00:29:55.354 } 00:29:55.354 } 00:29:55.354 ] 00:29:55.354 
}, 00:29:55.354 { 00:29:55.354 "subsystem": "bdev", 00:29:55.354 "config": [ 00:29:55.354 { 00:29:55.354 "method": "bdev_set_options", 00:29:55.354 "params": { 00:29:55.354 "bdev_io_pool_size": 65535, 00:29:55.354 "bdev_io_cache_size": 256, 00:29:55.354 "bdev_auto_examine": true, 00:29:55.354 "iobuf_small_cache_size": 128, 00:29:55.354 "iobuf_large_cache_size": 16 00:29:55.354 } 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "method": "bdev_raid_set_options", 00:29:55.354 "params": { 00:29:55.354 "process_window_size_kb": 1024, 00:29:55.354 "process_max_bandwidth_mb_sec": 0 00:29:55.354 } 00:29:55.354 }, 00:29:55.354 { 00:29:55.354 "method": "bdev_iscsi_set_options", 00:29:55.354 "params": { 00:29:55.354 "timeout_sec": 30 00:29:55.354 } 00:29:55.355 }, 00:29:55.355 { 00:29:55.355 "method": "bdev_nvme_set_options", 00:29:55.355 "params": { 00:29:55.355 "action_on_timeout": "none", 00:29:55.355 "timeout_us": 0, 00:29:55.355 "timeout_admin_us": 0, 00:29:55.355 "keep_alive_timeout_ms": 10000, 00:29:55.355 "arbitration_burst": 0, 00:29:55.355 "low_priority_weight": 0, 00:29:55.355 "medium_priority_weight": 0, 00:29:55.355 "high_priority_weight": 0, 00:29:55.355 "nvme_adminq_poll_period_us": 10000, 00:29:55.355 "nvme_ioq_poll_period_us": 0, 00:29:55.355 "io_queue_requests": 512, 00:29:55.355 "delay_cmd_submit": true, 00:29:55.355 "transport_retry_count": 4, 00:29:55.355 "bdev_retry_count": 3, 00:29:55.355 "transport_ack_timeout": 0, 00:29:55.355 "ctrlr_loss_timeout_sec": 0, 00:29:55.355 "reconnect_delay_sec": 0, 00:29:55.355 "fast_io_fail_timeout_sec": 0, 00:29:55.355 "disable_auto_failback": false, 00:29:55.355 "generate_uuids": false, 00:29:55.355 "transport_tos": 0, 00:29:55.355 "nvme_error_stat": false, 00:29:55.355 "rdma_srq_size": 0, 00:29:55.355 "io_path_stat": false, 00:29:55.355 "allow_accel_sequence": false, 00:29:55.355 "rdma_max_cq_size": 0, 00:29:55.355 "rdma_cm_event_timeout_ms": 0, 00:29:55.355 "dhchap_digests": [ 00:29:55.355 "sha256", 00:29:55.355 "sha384", 00:29:55.355 "sha512" 00:29:55.355 ], 00:29:55.355 "dhchap_dhgroups": [ 00:29:55.355 "null", 00:29:55.355 "ffdhe2048", 00:29:55.355 "ffdhe3072", 00:29:55.355 "ffdhe4096", 00:29:55.355 "ffdhe6144", 00:29:55.355 "ffdhe8192" 00:29:55.355 ] 00:29:55.355 } 00:29:55.355 }, 00:29:55.355 { 00:29:55.355 "method": "bdev_nvme_attach_controller", 00:29:55.355 "params": { 00:29:55.355 "name": "nvme0", 00:29:55.355 "trtype": "TCP", 00:29:55.355 "adrfam": "IPv4", 00:29:55.355 "traddr": "127.0.0.1", 00:29:55.355 "trsvcid": "4420", 00:29:55.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.355 "prchk_reftag": false, 00:29:55.355 "prchk_guard": false, 00:29:55.355 "ctrlr_loss_timeout_sec": 0, 00:29:55.355 "reconnect_delay_sec": 0, 00:29:55.355 "fast_io_fail_timeout_sec": 0, 00:29:55.355 "psk": "key0", 00:29:55.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:55.355 "hdgst": false, 00:29:55.355 "ddgst": false 00:29:55.355 } 00:29:55.355 }, 00:29:55.355 { 00:29:55.355 "method": "bdev_nvme_set_hotplug", 00:29:55.355 "params": { 00:29:55.355 "period_us": 100000, 00:29:55.355 "enable": false 00:29:55.355 } 00:29:55.355 }, 00:29:55.355 { 00:29:55.355 "method": "bdev_wait_for_examine" 00:29:55.355 } 00:29:55.355 ] 00:29:55.355 }, 00:29:55.355 { 00:29:55.355 "subsystem": "nbd", 00:29:55.355 "config": [] 00:29:55.355 } 00:29:55.355 ] 00:29:55.355 }' 00:29:55.355 20:05:46 keyring_file -- keyring/file.sh@114 -- # killprocess 2246765 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2246765 ']' 00:29:55.355 20:05:46 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 2246765 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2246765 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2246765' 00:29:55.355 killing process with pid 2246765 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@969 -- # kill 2246765 00:29:55.355 Received shutdown signal, test time was about 1.000000 seconds 00:29:55.355 00:29:55.355 Latency(us) 00:29:55.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.355 =================================================================================================================== 00:29:55.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:55.355 20:05:46 keyring_file -- common/autotest_common.sh@974 -- # wait 2246765 00:29:55.615 20:05:47 keyring_file -- keyring/file.sh@117 -- # bperfpid=2248358 00:29:55.615 20:05:47 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2248358 /var/tmp/bperf.sock 00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2248358 ']' 00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:55.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
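The killprocess expansion that common/autotest_common.sh@950-@974 walks through above (and repeats at the end of both suites) follows one fixed pattern. A sketch reconstructed from those xtrace lines; wait only reaps the pid because the harness started it as a child of the same shell:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                           # @950: pid required
        kill -0 "$pid" || return 1                          # @954: must be alive
        if [ "$(uname)" = Linux ]; then                     # @955
            process_name=$(ps --no-headers -o comm= "$pid") # @956
        fi
        [ "$process_name" = sudo ] && return 1              # @960: never kill sudo
        echo "killing process with pid $pid"                # @968
        kill "$pid"                                         # @969
        wait "$pid"                                         # @974: reap the child
    }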
00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.615 20:05:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:55.615 20:05:47 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:55.615 20:05:47 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:55.615 "subsystems": [ 00:29:55.615 { 00:29:55.615 "subsystem": "keyring", 00:29:55.615 "config": [ 00:29:55.615 { 00:29:55.615 "method": "keyring_file_add_key", 00:29:55.615 "params": { 00:29:55.615 "name": "key0", 00:29:55.615 "path": "/tmp/tmp.UKxnFvovW9" 00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "keyring_file_add_key", 00:29:55.615 "params": { 00:29:55.615 "name": "key1", 00:29:55.615 "path": "/tmp/tmp.pyKTq3X262" 00:29:55.615 } 00:29:55.615 } 00:29:55.615 ] 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "subsystem": "iobuf", 00:29:55.615 "config": [ 00:29:55.615 { 00:29:55.615 "method": "iobuf_set_options", 00:29:55.615 "params": { 00:29:55.615 "small_pool_count": 8192, 00:29:55.615 "large_pool_count": 1024, 00:29:55.615 "small_bufsize": 8192, 00:29:55.615 "large_bufsize": 135168 00:29:55.615 } 00:29:55.615 } 00:29:55.615 ] 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "subsystem": "sock", 00:29:55.615 "config": [ 00:29:55.615 { 00:29:55.615 "method": "sock_set_default_impl", 00:29:55.615 "params": { 00:29:55.615 "impl_name": "posix" 00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "sock_impl_set_options", 00:29:55.615 "params": { 00:29:55.615 "impl_name": "ssl", 00:29:55.615 "recv_buf_size": 4096, 00:29:55.615 "send_buf_size": 4096, 00:29:55.615 "enable_recv_pipe": true, 00:29:55.615 "enable_quickack": false, 00:29:55.615 "enable_placement_id": 0, 00:29:55.615 "enable_zerocopy_send_server": true, 00:29:55.615 "enable_zerocopy_send_client": false, 00:29:55.615 "zerocopy_threshold": 0, 00:29:55.615 "tls_version": 0, 00:29:55.615 "enable_ktls": false 00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "sock_impl_set_options", 00:29:55.615 "params": { 00:29:55.615 "impl_name": "posix", 00:29:55.615 "recv_buf_size": 2097152, 00:29:55.615 "send_buf_size": 2097152, 00:29:55.615 "enable_recv_pipe": true, 00:29:55.615 "enable_quickack": false, 00:29:55.615 "enable_placement_id": 0, 00:29:55.615 "enable_zerocopy_send_server": true, 00:29:55.615 "enable_zerocopy_send_client": false, 00:29:55.615 "zerocopy_threshold": 0, 00:29:55.615 "tls_version": 0, 00:29:55.615 "enable_ktls": false 00:29:55.615 } 00:29:55.615 } 00:29:55.615 ] 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "subsystem": "vmd", 00:29:55.615 "config": [] 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "subsystem": "accel", 00:29:55.615 "config": [ 00:29:55.615 { 00:29:55.615 "method": "accel_set_options", 00:29:55.615 "params": { 00:29:55.615 "small_cache_size": 128, 00:29:55.615 "large_cache_size": 16, 00:29:55.615 "task_count": 2048, 00:29:55.615 "sequence_count": 2048, 00:29:55.615 "buf_count": 2048 00:29:55.615 } 00:29:55.615 } 00:29:55.615 ] 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "subsystem": "bdev", 00:29:55.615 "config": [ 00:29:55.615 { 00:29:55.615 "method": "bdev_set_options", 00:29:55.615 "params": { 00:29:55.615 "bdev_io_pool_size": 65535, 00:29:55.615 "bdev_io_cache_size": 256, 00:29:55.615 "bdev_auto_examine": true, 00:29:55.615 "iobuf_small_cache_size": 128, 00:29:55.615 "iobuf_large_cache_size": 16 
00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "bdev_raid_set_options", 00:29:55.615 "params": { 00:29:55.615 "process_window_size_kb": 1024, 00:29:55.615 "process_max_bandwidth_mb_sec": 0 00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "bdev_iscsi_set_options", 00:29:55.615 "params": { 00:29:55.615 "timeout_sec": 30 00:29:55.615 } 00:29:55.615 }, 00:29:55.615 { 00:29:55.615 "method": "bdev_nvme_set_options", 00:29:55.615 "params": { 00:29:55.615 "action_on_timeout": "none", 00:29:55.615 "timeout_us": 0, 00:29:55.615 "timeout_admin_us": 0, 00:29:55.615 "keep_alive_timeout_ms": 10000, 00:29:55.615 "arbitration_burst": 0, 00:29:55.615 "low_priority_weight": 0, 00:29:55.615 "medium_priority_weight": 0, 00:29:55.615 "high_priority_weight": 0, 00:29:55.615 "nvme_adminq_poll_period_us": 10000, 00:29:55.615 "nvme_ioq_poll_period_us": 0, 00:29:55.615 "io_queue_requests": 512, 00:29:55.615 "delay_cmd_submit": true, 00:29:55.615 "transport_retry_count": 4, 00:29:55.615 "bdev_retry_count": 3, 00:29:55.615 "transport_ack_timeout": 0, 00:29:55.615 "ctrlr_loss_timeout_sec": 0, 00:29:55.615 "reconnect_delay_sec": 0, 00:29:55.616 "fast_io_fail_timeout_sec": 0, 00:29:55.616 "disable_auto_failback": false, 00:29:55.616 "generate_uuids": false, 00:29:55.616 "transport_tos": 0, 00:29:55.616 "nvme_error_stat": false, 00:29:55.616 "rdma_srq_size": 0, 00:29:55.616 "io_path_stat": false, 00:29:55.616 "allow_accel_sequence": false, 00:29:55.616 "rdma_max_cq_size": 0, 00:29:55.616 "rdma_cm_event_timeout_ms": 0, 00:29:55.616 "dhchap_digests": [ 00:29:55.616 "sha256", 00:29:55.616 "sha384", 00:29:55.616 "sha512" 00:29:55.616 ], 00:29:55.616 "dhchap_dhgroups": [ 00:29:55.616 "null", 00:29:55.616 "ffdhe2048", 00:29:55.616 "ffdhe3072", 00:29:55.616 "ffdhe4096", 00:29:55.616 "ffdhe6144", 00:29:55.616 "ffdhe8192" 00:29:55.616 ] 00:29:55.616 } 00:29:55.616 }, 00:29:55.616 { 00:29:55.616 "method": "bdev_nvme_attach_controller", 00:29:55.616 "params": { 00:29:55.616 "name": "nvme0", 00:29:55.616 "trtype": "TCP", 00:29:55.616 "adrfam": "IPv4", 00:29:55.616 "traddr": "127.0.0.1", 00:29:55.616 "trsvcid": "4420", 00:29:55.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.616 "prchk_reftag": false, 00:29:55.616 "prchk_guard": false, 00:29:55.616 "ctrlr_loss_timeout_sec": 0, 00:29:55.616 "reconnect_delay_sec": 0, 00:29:55.616 "fast_io_fail_timeout_sec": 0, 00:29:55.616 "psk": "key0", 00:29:55.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:55.616 "hdgst": false, 00:29:55.616 "ddgst": false 00:29:55.616 } 00:29:55.616 }, 00:29:55.616 { 00:29:55.616 "method": "bdev_nvme_set_hotplug", 00:29:55.616 "params": { 00:29:55.616 "period_us": 100000, 00:29:55.616 "enable": false 00:29:55.616 } 00:29:55.616 }, 00:29:55.616 { 00:29:55.616 "method": "bdev_wait_for_examine" 00:29:55.616 } 00:29:55.616 ] 00:29:55.616 }, 00:29:55.616 { 00:29:55.616 "subsystem": "nbd", 00:29:55.616 "config": [] 00:29:55.616 } 00:29:55.616 ] 00:29:55.616 }' 00:29:55.616 [2024-07-24 20:05:47.067642] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
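The relaunch completing here is the crux of file.sh@112-@119: save_config captured the JSON printed above (both keys, the sock options, the TLS-attached controller), the old bperf was killed at @114, and bdevperf is restarted with that JSON fed straight back through process substitution; the /dev/fd/63 in the command line is simply what <(...) expands to. A sketch of the pair, flags copied from the trace:

    config=$(bperf_cmd save_config)   # file.sh@112: JSON exactly as shown above
    # (old instance killed in between, file.sh@114)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
        -c <(echo "$config") &        # file.sh@115: shell renders <() as /dev/fd/63
    bperfpid=$!                       # 2248358 in this run, file.sh@117
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # file.sh@119

The point of the exercise: the restarted instance comes up with key0/key1 and the TLS controller already configured, which is what the (( 2 == 2 )) key-count and nvme0 checks below verify.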
00:29:55.616 [2024-07-24 20:05:47.067693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248358 ] 00:29:55.616 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.616 [2024-07-24 20:05:47.121001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.616 [2024-07-24 20:05:47.192794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.875 [2024-07-24 20:05:47.352162] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:56.442 20:05:47 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.442 20:05:47 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:56.442 20:05:47 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:56.442 20:05:47 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:56.442 20:05:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:56.700 20:05:48 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:56.700 20:05:48 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:56.700 20:05:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:56.700 20:05:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:56.700 20:05:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:56.958 20:05:48 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:56.958 20:05:48 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:56.958 20:05:48 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:56.958 20:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:57.215 20:05:48 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:57.216 20:05:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:57.216 20:05:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UKxnFvovW9 /tmp/tmp.pyKTq3X262 00:29:57.216 20:05:48 keyring_file -- keyring/file.sh@20 -- # killprocess 2248358 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2248358 ']' 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2248358 00:29:57.216 20:05:48 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2248358 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2248358' 00:29:57.216 killing process with pid 2248358 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@969 -- # kill 2248358 00:29:57.216 Received shutdown signal, test time was about 1.000000 seconds 00:29:57.216 00:29:57.216 Latency(us) 00:29:57.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.216 =================================================================================================================== 00:29:57.216 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:57.216 20:05:48 keyring_file -- common/autotest_common.sh@974 -- # wait 2248358 00:29:57.474 20:05:48 keyring_file -- keyring/file.sh@21 -- # killprocess 2246687 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2246687 ']' 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2246687 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2246687 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2246687' 00:29:57.474 killing process with pid 2246687 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@969 -- # kill 2246687 00:29:57.474 [2024-07-24 20:05:48.861790] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:57.474 20:05:48 keyring_file -- common/autotest_common.sh@974 -- # wait 2246687 00:29:57.732 00:29:57.732 real 0m12.086s 00:29:57.732 user 0m28.213s 00:29:57.732 sys 0m2.744s 00:29:57.732 20:05:49 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.732 20:05:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:57.732 ************************************ 00:29:57.732 END TEST keyring_file 00:29:57.732 ************************************ 00:29:57.732 20:05:49 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:29:57.732 20:05:49 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:57.732 20:05:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:57.732 20:05:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.732 20:05:49 -- common/autotest_common.sh@10 -- # set +x 00:29:57.732 ************************************ 00:29:57.732 START TEST keyring_linux 00:29:57.732 ************************************ 00:29:57.732 20:05:49 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:57.732 * Looking for test 
storage... 00:29:57.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:57.732 20:05:49 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:57.991 20:05:49 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.991 20:05:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.992 20:05:49 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.992 20:05:49 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.992 20:05:49 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.992 20:05:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.992 20:05:49 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.992 20:05:49 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.992 20:05:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:57.992 20:05:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:57.992 20:05:49 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:57.992 /tmp/:spdk-test:key0 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:57.992 20:05:49 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:57.992 20:05:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:57.992 /tmp/:spdk-test:key1 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2248766 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2248766 00:29:57.992 20:05:49 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2248766 ']' 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.992 20:05:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:57.992 [2024-07-24 20:05:49.469396] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
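keyring_linux now re-runs the same TLS attach story against the kernel session keyring instead of RPC-registered key files: names beginning with a colon (":spdk-test:key0") are resolved via keyctl once keyring_linux_set_options --enable is issued (linux.sh@73 below), whereas the plain "key0" of the previous suite resolved through keyring_file_add_key. The two attach commands, verbatim from the trace, differ only in the --psk argument:

    # keyring_file flavor (file.sh@97): name registered over RPC from a file
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # keyring_linux flavor (linux.sh@75): name looked up in the session keyring
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0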
00:29:57.992 [2024-07-24 20:05:49.469440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248766 ] 00:29:57.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.992 [2024-07-24 20:05:49.521363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.250 [2024-07-24 20:05:49.603063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:58.817 [2024-07-24 20:05:50.281459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.817 null0 00:29:58.817 [2024-07-24 20:05:50.313516] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:58.817 [2024-07-24 20:05:50.313837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:58.817 202011914 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:58.817 502110628 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2249002 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2249002 /var/tmp/bperf.sock 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2249002 ']' 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.817 20:05:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:58.817 20:05:50 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:58.817 [2024-07-24 20:05:50.382752] Starting SPDK v24.09-pre git sha1 3bc1795d3 / DPDK 24.03.0 initialization... 
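linux.sh@66-@67 above seed the session keyring directly with keyctl; the serials echoed back (202011914 and 502110628 in this run) are what the later search/print/unlink steps operate on. The same round trip, condensed from the commands visible in the trace:

    sn=$(keyctl add user :spdk-test:key0 \
         NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)
    keyctl search @s user :spdk-test:key0   # linux.sh@16: name -> same serial
    keyctl print "$sn"                      # linux.sh@27: payload must match
    keyctl unlink "$sn"                     # linux.sh@34: prints "1 links removed"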
00:29:58.817 [2024-07-24 20:05:50.382797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249002 ] 00:29:58.817 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.075 [2024-07-24 20:05:50.435384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.076 [2024-07-24 20:05:50.514705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.643 20:05:51 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.643 20:05:51 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:59.643 20:05:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:59.643 20:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:59.902 20:05:51 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:59.902 20:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:00.160 20:05:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:00.160 20:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:00.160 [2024-07-24 20:05:51.735405] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:00.418 nvme0n1 00:30:00.418 20:05:51 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:00.418 20:05:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:00.418 20:05:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:00.418 20:05:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:00.418 20:05:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.418 20:05:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:00.418 20:05:52 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:00.418 20:05:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:00.418 20:05:52 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:00.418 20:05:52 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:00.418 20:05:52 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.418 20:05:52 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:00.418 20:05:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@25 -- # sn=202011914 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@26 -- # [[ 202011914 == \2\0\2\0\1\1\9\1\4 ]] 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 202011914 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:00.676 20:05:52 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.934 Running I/O for 1 seconds... 00:30:01.879 00:30:01.879 Latency(us) 00:30:01.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.879 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:01.879 nvme0n1 : 1.03 3192.55 12.47 0.00 0.00 39571.09 8719.14 49237.48 00:30:01.879 =================================================================================================================== 00:30:01.879 Total : 3192.55 12.47 0.00 0.00 39571.09 8719.14 49237.48 00:30:01.879 0 00:30:01.879 20:05:53 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:01.879 20:05:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:02.140 20:05:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:02.140 20:05:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:02.140 20:05:53 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:02.140 20:05:53 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:02.399 [2024-07-24 20:05:53.833207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aa770 (107): Transport endpoint is not connected 00:30:02.399 [2024-07-24 20:05:53.833223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:02.399 [2024-07-24 20:05:53.834203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17aa770 (9): Bad file descriptor 00:30:02.399 [2024-07-24 20:05:53.835203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:02.399 [2024-07-24 20:05:53.835212] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:02.399 [2024-07-24 20:05:53.835220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:02.399 request: 00:30:02.399 { 00:30:02.399 "name": "nvme0", 00:30:02.399 "trtype": "tcp", 00:30:02.399 "traddr": "127.0.0.1", 00:30:02.399 "adrfam": "ipv4", 00:30:02.399 "trsvcid": "4420", 00:30:02.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.399 "prchk_reftag": false, 00:30:02.399 "prchk_guard": false, 00:30:02.399 "hdgst": false, 00:30:02.399 "ddgst": false, 00:30:02.399 "psk": ":spdk-test:key1", 00:30:02.399 "method": "bdev_nvme_attach_controller", 00:30:02.399 "req_id": 1 00:30:02.399 } 00:30:02.399 Got JSON-RPC error response 00:30:02.399 response: 00:30:02.399 { 00:30:02.399 "code": -5, 00:30:02.399 "message": "Input/output error" 00:30:02.399 } 00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@33 -- # sn=202011914 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 202011914 00:30:02.399 1 links removed 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@33 -- # sn=502110628 00:30:02.399 
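The failed attach above is the suite's negative case: :spdk-test:key1 exists in the session keyring, but the target side does not accept that PSK, so the TCP/TLS session drops (the errno 107 "Transport endpoint is not connected" lines) and the RPC surfaces -5 Input/output error rather than the -19 missing-file error the keyring_file run produced. The NOT wrapper whose expansion brackets both failures (common/autotest_common.sh@650-@677) simply inverts the exit status; a reduced sketch that omits the valid_exec_arg/type checks and the signal-exit special case visible at @638-@661:

    NOT() {   # expected-to-fail wrapper, reduced sketch
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # @677 (( !es == 0 )): NOT succeeds only if "$@" failed
    }

    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1   # linux.sh@84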
20:05:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 502110628
00:30:02.399 1 links removed
00:30:02.399 20:05:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2249002
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2249002 ']'
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2249002
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2249002
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2249002'
00:30:02.399 killing process with pid 2249002
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@969 -- # kill 2249002
00:30:02.399 Received shutdown signal, test time was about 1.000000 seconds
00:30:02.399
00:30:02.399 Latency(us)
00:30:02.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:02.399 ===================================================================================================================
00:30:02.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:02.399 20:05:53 keyring_linux -- common/autotest_common.sh@974 -- # wait 2249002
00:30:02.658 20:05:54 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2248766
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2248766 ']'
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2248766
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2248766
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2248766'
00:30:02.658 killing process with pid 2248766
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@969 -- # kill 2248766
00:30:02.658 20:05:54 keyring_linux -- common/autotest_common.sh@974 -- # wait 2248766
00:30:02.918
00:30:02.918 real 0m5.205s
00:30:02.918 user 0m9.117s
00:30:02.918 sys 0m1.174s
00:30:02.918 20:05:54 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:02.918 20:05:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:30:02.918 ************************************
00:30:02.918 END TEST keyring_linux
00:30:02.918 ************************************
00:30:02.918 20:05:54 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:30:02.918 20:05:54 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:30:02.918 20:05:54 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:30:02.918 20:05:54 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:30:02.918 20:05:54 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:30:02.918 20:05:54 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:30:02.918 20:05:54 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:30:02.918 20:05:54 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:02.918 20:05:54 -- common/autotest_common.sh@10 -- # set +x
00:30:02.918 20:05:54 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:30:02.918 20:05:54 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:30:02.918 20:05:54 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:30:02.918 20:05:54 -- common/autotest_common.sh@10 -- # set +x
00:30:08.233 INFO: APP EXITING
00:30:08.233 INFO: killing all VMs
00:30:08.233 INFO: killing vhost app
00:30:08.233 INFO: EXIT DONE
00:30:09.667 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:30:09.667 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:30:09.667 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:30:09.667 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:30:09.667 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:30:09.667 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:30:09.667 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:30:09.927 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:30:12.468 Cleaning
00:30:12.468 Removing: /var/run/dpdk/spdk0/config
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:12.468 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:12.468 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:12.468 Removing: /var/run/dpdk/spdk1/config
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:12.468 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:12.468 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:12.468 Removing: /var/run/dpdk/spdk1/mp_socket
00:30:12.468 Removing: /var/run/dpdk/spdk2/config
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:12.468 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:12.468 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:12.468 Removing: /var/run/dpdk/spdk3/config
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:12.468 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:12.468 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:12.468 Removing: /var/run/dpdk/spdk4/config
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:12.468 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:12.728 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:12.728 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:12.728 Removing: /dev/shm/bdev_svc_trace.1
00:30:12.728 Removing: /dev/shm/nvmf_trace.0
00:30:12.728 Removing: /dev/shm/spdk_tgt_trace.pid1869660
00:30:12.728 Removing: /var/run/dpdk/spdk0
00:30:12.728 Removing: /var/run/dpdk/spdk1
00:30:12.728 Removing: /var/run/dpdk/spdk2
00:30:12.728 Removing: /var/run/dpdk/spdk3
00:30:12.728 Removing: /var/run/dpdk/spdk4
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1867496
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1868589
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1869660
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1870289
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1871241
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1871477
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1872450
00:30:12.728 Removing: /var/run/dpdk/spdk_pid1872601
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1872805
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1874329
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1875596
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1875872
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1876155
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1876462
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1876753
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1877009
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1877257
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1877533
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1878446
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1881330
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1881751
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1882009
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1882165
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1882517
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1882743
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1883185
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1883250
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1883516
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1883746
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1884004
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1884020
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1884571
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1884817
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1885105
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1888888
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1893562
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1903572
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1904254
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1908518
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1908771
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1913030
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1918835
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1921504
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1931927
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1941475
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1943182
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1944104
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1960950
00:30:12.729 Removing: /var/run/dpdk/spdk_pid1964786
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2009256
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2014582
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2020641
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2026644
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2026646
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2027561
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2028471
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2029270
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2029857
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2029859
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2030092
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2030236
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2030321
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2031122
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2031940
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2032853
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2033337
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2033500
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2033771
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2034930
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2036445
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2044610
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2068994
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2074005
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2075611
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2077560
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2077710
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2077926
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2078164
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2078885
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2080725
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2081717
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2082220
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2084522
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2085036
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2085769
00:30:12.729 Removing: /var/run/dpdk/spdk_pid2089819
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2099780
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2103809
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2109574
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2110988
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2112618
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2117272
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2121290
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2128799
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2128858
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2133350
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2133579
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2133814
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2134262
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2134273
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2138746
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2139313
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2143646
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2146381
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2151798
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2157132
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2166183
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2173175
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2173177
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2190879
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2191475
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2192168
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2192864
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2193626
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2194321
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2195012
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2195604
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2199962
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2200198
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2206033
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2206311
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2208535
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2216556
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2216569
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2221583
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2223547
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2225626
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2226781
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2228740
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2229817
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2238544
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2239006
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2239682
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2241943
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2242407
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2242876
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2246687
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2246765
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2248358
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2248766
00:30:12.989 Removing: /var/run/dpdk/spdk_pid2249002
00:30:12.989 Clean
00:30:12.989 20:06:04 -- common/autotest_common.sh@1451 -- # return 0
00:30:12.990 20:06:04 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:30:12.990 20:06:04 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:12.990 20:06:04 -- common/autotest_common.sh@10 -- # set +x
00:30:12.990 20:06:04 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:30:12.990 20:06:04 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:12.990 20:06:04 -- common/autotest_common.sh@10 -- # set +x
00:30:12.990 20:06:04 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:12.990 20:06:04 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:30:12.990 20:06:04 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:30:12.990 20:06:04 -- spdk/autotest.sh@395 -- # hash lcov
00:30:12.990 20:06:04 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:12.990 20:06:04 -- spdk/autotest.sh@397 -- # hostname
00:30:12.990 20:06:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:13.249 geninfo: WARNING: invalid characters removed from testname!
00:30:35.200 20:06:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:35.770 20:06:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:37.679 20:06:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:39.588 20:06:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:41.495 20:06:32 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:43.403 20:06:34 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:44.789 20:06:36 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:45.051 20:06:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:45.051 20:06:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:45.051 20:06:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:45.051 20:06:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:45.052 20:06:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.052 20:06:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.052 20:06:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.052 20:06:36 -- paths/export.sh@5 -- $ export PATH
00:30:45.052 20:06:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:45.052 20:06:36 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:45.052 20:06:36 -- common/autobuild_common.sh@447 -- $ date +%s
00:30:45.052 20:06:36 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721844396.XXXXXX
00:30:45.052 20:06:36 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721844396.G5TE9a
00:30:45.052 20:06:36 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:30:45.052 20:06:36 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:30:45.052 20:06:36 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:45.052 20:06:36 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:45.052 20:06:36 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:45.052 20:06:36 -- common/autobuild_common.sh@463 -- $ get_config_params
00:30:45.052 20:06:36 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:30:45.052 20:06:36 -- common/autotest_common.sh@10 -- $ set +x
00:30:45.052 20:06:36 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:45.052 20:06:36 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:30:45.052 20:06:36 -- pm/common@17 -- $ local monitor
00:30:45.052 20:06:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.052 20:06:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.052 20:06:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.052 20:06:36 -- pm/common@21 -- $ date +%s
00:30:45.052 20:06:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:45.052 20:06:36 -- pm/common@21 -- $ date +%s
00:30:45.052 20:06:36 -- pm/common@25 -- $ sleep 1
00:30:45.052 20:06:36 -- pm/common@21 -- $ date +%s
00:30:45.052 20:06:36 -- pm/common@21 -- $ date +%s
00:30:45.052 20:06:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844396
00:30:45.052 20:06:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844396
00:30:45.052 20:06:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844396
00:30:45.052 20:06:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844396
00:30:45.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844396_collect-vmstat.pm.log
00:30:45.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844396_collect-cpu-load.pm.log
00:30:45.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844396_collect-cpu-temp.pm.log
00:30:45.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844396_collect-bmc-pm.bmc.pm.log
00:30:46.041 20:06:37 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:30:46.041 20:06:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:46.041 20:06:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:46.041 20:06:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:46.041 20:06:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:46.041 20:06:37 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:46.041 20:06:37 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:46.041 20:06:37 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:46.041 20:06:37 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:46.041 20:06:37 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:46.041 20:06:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:46.041 20:06:37 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:46.041 20:06:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:46.041 20:06:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:46.041 20:06:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:46.041 20:06:37 -- pm/common@44 -- $ pid=2259367
00:30:46.041 20:06:37 -- pm/common@50 -- $ kill -TERM 2259367
00:30:46.041 20:06:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:46.041 20:06:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:46.041 20:06:37 -- pm/common@44 -- $ pid=2259368
00:30:46.041 20:06:37 -- pm/common@50 -- $ kill -TERM 2259368
00:30:46.041 20:06:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:46.041 20:06:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:46.041 20:06:37 -- pm/common@44 -- $ pid=2259370
00:30:46.041 20:06:37 -- pm/common@50 -- $ kill -TERM 2259370
00:30:46.041 20:06:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:46.041 20:06:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:46.041 20:06:37 -- pm/common@44 -- $ pid=2259398
00:30:46.041 20:06:37 -- pm/common@50 -- $ sudo -E kill -TERM 2259398
00:30:46.041 + [[ -n 1763716 ]]
00:30:46.041 + sudo kill 1763716
00:30:46.051 [Pipeline] }
00:30:46.069 [Pipeline] // stage
00:30:46.074 [Pipeline] }
00:30:46.093 [Pipeline] // timeout
00:30:46.098 [Pipeline] }
00:30:46.116 [Pipeline] // catchError
00:30:46.122 [Pipeline] }
00:30:46.141 [Pipeline] // wrap
00:30:46.148 [Pipeline] }
00:30:46.162 [Pipeline] // catchError
00:30:46.170 [Pipeline] stage
00:30:46.172 [Pipeline] { (Epilogue)
00:30:46.186 [Pipeline] catchError
00:30:46.187 [Pipeline] {
00:30:46.197 [Pipeline] echo
00:30:46.198 Cleanup processes
00:30:46.203 [Pipeline] sh
00:30:46.489 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:46.489 2259517 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:46.489 2259768 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:46.504 [Pipeline] sh
00:30:46.792 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:46.792 ++ grep -v 'sudo pgrep'
00:30:46.792 ++ awk '{print $1}'
00:30:46.792 + sudo kill -9 2259517
00:30:46.806 [Pipeline] sh
00:30:47.094 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:57.097 [Pipeline] sh
00:30:57.385 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:57.385 Artifacts sizes are good
00:30:57.400 [Pipeline] archiveArtifacts
00:30:57.408 Archiving artifacts
00:30:57.599 [Pipeline] sh
00:30:57.886 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
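
For reference, the "Cleanup processes" step traced above reduces to a small pgrep pipeline. The sketch below is reconstructed from the xtrace in this log, not copied from the Jenkins shared library; the ws variable name and the || true fallback for an empty match are assumptions added for illustration:

    # Find anything still running out of the job workspace, skip the pgrep
    # invocation itself, and force-kill the rest so the workspace can be wiped.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest        # workspace of this job, as seen in the log
    pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true                             # assumption: tolerate an empty pid list

In this run the pipeline caught the leftover /usr/bin/ipmitool SDR dump (pid 2259517) spawned by the collect-bmc-pm monitor and killed it before the artifacts were compressed and archived.
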
00:30:57.903 [Pipeline] cleanWs
00:30:57.914 [WS-CLEANUP] Deleting project workspace...
00:30:57.914 [WS-CLEANUP] Deferred wipeout is used...
00:30:57.922 [WS-CLEANUP] done
00:30:57.923 [Pipeline] }
00:30:57.946 [Pipeline] // catchError
00:30:57.957 [Pipeline] sh
00:30:58.240 + logger -p user.info -t JENKINS-CI
00:30:58.250 [Pipeline] }
00:30:58.266 [Pipeline] // stage
00:30:58.272 [Pipeline] }
00:30:58.291 [Pipeline] // node
00:30:58.297 [Pipeline] End of Pipeline
00:30:58.336 Finished: SUCCESS
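
A note for readers following the keyring_linux teardown at the top of this section: the killprocess helper whose xtrace appears there can be reconstructed, step for step, from the traced lines of common/autotest_common.sh (@950-@974). The sketch below is a rough reconstruction inferred from this log only, not the actual SPDK source; the early-return on a dead pid is an assumption, and the sudo branch is never executed in this run, so its real body is not visible here and is left as a no-op:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @950: refuse an empty pid
        kill -0 "$pid" || return 0                          # @954: assumption - nothing to do if already gone
        local process_name=$pid
        if [ "$(uname)" = Linux ]; then                     # @955: Linux-only lookup of the command name
            process_name=$(ps --no-headers -o comm= "$pid") # @956: e.g. reactor_0 / reactor_1 above
        fi
        if [ "$process_name" = sudo ]; then                 # @960: branch not taken in this run,
            :                                               #       so its body is not shown here
        else
            echo "killing process with pid $pid"            # @968
            kill "$pid"                                     # @969: default SIGTERM, requesting a clean shutdown
        fi
        wait "$pid"                                         # @974: reap the child and surface its exit status
    }

In the trace it runs as killprocess 2249002 and then killprocess 2248766, matching the reactor_1 and reactor_0 process names printed by ps; the first kill is what produces the "Received shutdown signal" message and the all-zero latency summary table above.
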